Test Report: Docker_Linux_containerd_arm64 22182

d8910aedaf59f4b051fab9f3c680e262e7105014:2025-12-17:42820

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 501.29
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 368.15
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 2.43
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 2.23
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 2.27
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 733.53
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 2.12
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 1.71
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 3.19
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 2.41
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 241.65
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 1.42
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 0.11
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 117.62
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.27
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.27
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.27
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.25
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 2.57
358 TestKubernetesUpgrade 798.36
413 TestStartStop/group/no-preload/serial/FirstStart 514.03
437 TestStartStop/group/newest-cni/serial/FirstStart 501.36
438 TestStartStop/group/no-preload/serial/DeployApp 2.97
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 112.9
442 TestStartStop/group/no-preload/serial/SecondStart 370.62
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 107.69
447 TestStartStop/group/newest-cni/serial/SecondStart 375.17
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 541.94
452 TestStartStop/group/newest-cni/serial/Pause 9.62
468 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 257.85
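
To reproduce a single failure from this list locally, one option is to re-run just that test by name. A sketch, assuming a minikube source checkout; the ./test/integration package path and the timeout value are assumptions based on the file names in this report (functional_test.go, helpers_test.go), not something the report states:

	# Hypothetical re-run of one failed test from a minikube checkout;
	# adjust the -run pattern for other tests in the table above.
	go test ./test/integration \
	  -run 'TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy' \
	  -timeout 90m -v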
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (501.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1217 10:32:12.071102 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:34:28.205783 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:34:55.914676 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.084651 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.091421 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.102926 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.124389 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.165844 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.247408 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.408950 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.730706 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:44.372858 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:45.654532 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:48.216742 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:53.338679 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:36:03.580964 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:36:24.062666 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:37:05.024572 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:38:26.948589 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:39:28.205599 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m19.831475025s)

-- stdout --
	* [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - HTTP_PROXY=localhost:36263
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:36263 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000105065s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001251054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001251054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
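All three kubeadm attempts above fail identically: the kubelet never answers http://127.0.0.1:10248/healthz within 4m0s. The log already names the next diagnostic steps and two candidate remediations (the NO_PROXY warning and the cgroup-driver suggestion). A sketch of applying them from the Jenkins workspace; the commands only recombine what the output above states, and whether the suggested flag actually fixes this kubelet is an open question:

	# Diagnostics quoted from the kubeadm output, run inside the minikube node:
	out/minikube-linux-arm64 -p functional-232588 ssh "sudo systemctl status kubelet"
	out/minikube-linux-arm64 -p functional-232588 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-arm64 -p functional-232588 ssh "curl -sSL http://127.0.0.1:10248/healthz"
	# Remediation hinted at by the warnings: put the node IP in NO_PROXY and
	# retry with the kubelet cgroup driver the "Suggestion" line proposes.
	export NO_PROXY=192.168.49.2
	out/minikube-linux-arm64 start -p functional-232588 --memory=4096 --apiserver-port=8441 \
	  --wait=all --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-rc.1 --extra-config=kubelet.cgroup-driver=systemd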
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
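For post-mortem scripting, individual fields can be pulled from this inspect output with a Go template rather than parsing the full JSON. A small sketch; the template paths simply mirror the structure printed above:

	# Host port mapped to the apiserver port 8441/tcp (35736 in the output above):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-232588
	# Node IP the tests expect in NO_PROXY (192.168.49.2 above):
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-232588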
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 6 (308.184854ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 10:39:53.704401 2968083 status.go:458] kubeconfig endpoint: get endpoint: "functional-232588" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
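The status error is self-describing: because the start never completed, the profile's endpoint was never written to the kubeconfig, and the stdout above names the fix. A sketch of verifying that, using only commands the output itself mentions:

	# Repair the stale kubectl context as the warning suggests, then confirm:
	out/minikube-linux-arm64 -p functional-232588 update-context
	kubectl config get-contexts functional-232588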
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-626013 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh sudo cat /etc/ssl/certs/29245742.pem                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh sudo cat /usr/share/ca-certificates/29245742.pem                                                                                          │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image load --daemon kicbase/echo-server:functional-626013 --alsologtostderr                                                                   │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image save kicbase/echo-server:functional-626013 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image rm kicbase/echo-server:functional-626013 --alsologtostderr                                                                              │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ update-context │ functional-626013 update-context --alsologtostderr -v=2                                                                                                         │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ update-context │ functional-626013 update-context --alsologtostderr -v=2                                                                                                         │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ update-context │ functional-626013 update-context --alsologtostderr -v=2                                                                                                         │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image save --daemon kicbase/echo-server:functional-626013 --alsologtostderr                                                                   │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls --format short --alsologtostderr                                                                                                     │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls --format yaml --alsologtostderr                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh pgrep buildkitd                                                                                                                           │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ image          │ functional-626013 image ls --format json --alsologtostderr                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls --format table --alsologtostderr                                                                                                     │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr                                                          │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ delete         │ -p functional-626013                                                                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ start          │ -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:31:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:31:33.598246 2962598 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:31:33.598364 2962598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:31:33.598368 2962598 out.go:374] Setting ErrFile to fd 2...
	I1217 10:31:33.598371 2962598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:31:33.598613 2962598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:31:33.599029 2962598 out.go:368] Setting JSON to false
	I1217 10:31:33.599862 2962598 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":58444,"bootTime":1765909050,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:31:33.599922 2962598 start.go:143] virtualization:  
	I1217 10:31:33.604102 2962598 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:31:33.608458 2962598 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:31:33.608568 2962598 notify.go:221] Checking for updates...
	I1217 10:31:33.615460 2962598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:31:33.618535 2962598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:31:33.621687 2962598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:31:33.624781 2962598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:31:33.627732 2962598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:31:33.630855 2962598 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:31:33.659354 2962598 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:31:33.659494 2962598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:31:33.718316 2962598 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 10:31:33.708911822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:31:33.718410 2962598 docker.go:319] overlay module found
	I1217 10:31:33.721733 2962598 out.go:179] * Using the docker driver based on user configuration
	I1217 10:31:33.724633 2962598 start.go:309] selected driver: docker
	I1217 10:31:33.724641 2962598 start.go:927] validating driver "docker" against <nil>
	I1217 10:31:33.724673 2962598 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:31:33.725398 2962598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:31:33.778807 2962598 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 10:31:33.769884695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:31:33.778949 2962598 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 10:31:33.779161 2962598 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 10:31:33.782197 2962598 out.go:179] * Using Docker driver with root privileges
	I1217 10:31:33.785123 2962598 cni.go:84] Creating CNI manager for ""
	I1217 10:31:33.785183 2962598 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:31:33.785190 2962598 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 10:31:33.785314 2962598 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:31:33.788604 2962598 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:31:33.791438 2962598 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:31:33.794332 2962598 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:31:33.797354 2962598 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:31:33.797355 2962598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:31:33.797401 2962598 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:31:33.797425 2962598 cache.go:65] Caching tarball of preloaded images
	I1217 10:31:33.797512 2962598 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:31:33.797521 2962598 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 10:31:33.797859 2962598 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:31:33.797876 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json: {Name:mk49253aa6bfdc09f9bf70cb1e55f0e79c85a4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:31:33.816497 2962598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:31:33.816514 2962598 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 10:31:33.816533 2962598 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:31:33.816562 2962598 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:31:33.816671 2962598 start.go:364] duration metric: took 95.407µs to acquireMachinesLock for "functional-232588"
	I1217 10:31:33.816695 2962598 start.go:93] Provisioning new machine with config: &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 10:31:33.816763 2962598 start.go:125] createHost starting for "" (driver="docker")
	I1217 10:31:33.820275 2962598 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1217 10:31:33.820609 2962598 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:36263 to docker env.
	I1217 10:31:33.820636 2962598 start.go:159] libmachine.API.Create for "functional-232588" (driver="docker")
	I1217 10:31:33.820656 2962598 client.go:173] LocalClient.Create starting
	I1217 10:31:33.820725 2962598 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 10:31:33.820773 2962598 main.go:143] libmachine: Decoding PEM data...
	I1217 10:31:33.820790 2962598 main.go:143] libmachine: Parsing certificate...
	I1217 10:31:33.820839 2962598 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 10:31:33.820861 2962598 main.go:143] libmachine: Decoding PEM data...
	I1217 10:31:33.820872 2962598 main.go:143] libmachine: Parsing certificate...
	I1217 10:31:33.821219 2962598 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 10:31:33.837120 2962598 cli_runner.go:211] docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 10:31:33.837204 2962598 network_create.go:284] running [docker network inspect functional-232588] to gather additional debugging logs...
	I1217 10:31:33.837220 2962598 cli_runner.go:164] Run: docker network inspect functional-232588
	W1217 10:31:33.854038 2962598 cli_runner.go:211] docker network inspect functional-232588 returned with exit code 1
	I1217 10:31:33.854057 2962598 network_create.go:287] error running [docker network inspect functional-232588]: docker network inspect functional-232588: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-232588 not found
	I1217 10:31:33.854069 2962598 network_create.go:289] output of [docker network inspect functional-232588]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-232588 not found
	
	** /stderr **
	I1217 10:31:33.854164 2962598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:31:33.870707 2962598 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018c72a0}
	I1217 10:31:33.870741 2962598 network_create.go:124] attempt to create docker network functional-232588 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 10:31:33.870800 2962598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-232588 functional-232588
	I1217 10:31:33.940249 2962598 network_create.go:108] docker network functional-232588 192.168.49.0/24 created
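For context, the bridge network the log shows minikube creating can be checked by hand with the same docker CLI; a minimal sketch (network name taken from this log, fields per the inspect template used above):

  # Confirm the subnet and gateway minikube picked (192.168.49.0/24 / 192.168.49.1 per the log):
  docker network inspect functional-232588 \
    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'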
	I1217 10:31:33.940271 2962598 kic.go:121] calculated static IP "192.168.49.2" for the "functional-232588" container
	I1217 10:31:33.940342 2962598 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 10:31:33.956273 2962598 cli_runner.go:164] Run: docker volume create functional-232588 --label name.minikube.sigs.k8s.io=functional-232588 --label created_by.minikube.sigs.k8s.io=true
	I1217 10:31:33.974428 2962598 oci.go:103] Successfully created a docker volume functional-232588
	I1217 10:31:33.974503 2962598 cli_runner.go:164] Run: docker run --rm --name functional-232588-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-232588 --entrypoint /usr/bin/test -v functional-232588:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 10:31:34.510953 2962598 oci.go:107] Successfully prepared a docker volume functional-232588
	I1217 10:31:34.511025 2962598 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:31:34.511033 2962598 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 10:31:34.511101 2962598 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-232588:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 10:31:38.349672 2962598 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-232588:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.838536036s)
	I1217 10:31:38.349694 2962598 kic.go:203] duration metric: took 3.838656878s to extract preloaded images to volume ...
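The extraction step above reuses the kicbase image as a throwaway tar runner against the named volume. A sketch of verifying the result by hand (volume and image names from this log; the /var/lib/containerd path is an assumption about where the preload unpacks image content):

  # List what the lz4 tarball unpacked into the volume:
  docker run --rm --entrypoint /bin/ls \
    -v functional-232588:/var \
    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141 /var/lib/containerd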
	W1217 10:31:38.349846 2962598 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 10:31:38.349963 2962598 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 10:31:38.401654 2962598 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-232588 --name functional-232588 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-232588 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-232588 --network functional-232588 --ip 192.168.49.2 --volume functional-232588:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 10:31:38.702497 2962598 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Running}}
	I1217 10:31:38.728034 2962598 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:31:38.753643 2962598 cli_runner.go:164] Run: docker exec functional-232588 stat /var/lib/dpkg/alternatives/iptables
	I1217 10:31:38.805029 2962598 oci.go:144] the created container "functional-232588" has a running status.
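The docker run above publishes each container port (22, 2376, 5000, 8441, 32443) to an ephemeral host port on 127.0.0.1. The actual mappings can be listed with:

  docker port functional-232588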
	I1217 10:31:38.805083 2962598 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519...
	I1217 10:31:38.809311 2962598 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 10:31:38.833921 2962598 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:31:38.859734 2962598 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 10:31:38.859751 2962598 kic_runner.go:114] Args: [docker exec --privileged functional-232588 chown docker:docker /home/docker/.ssh/authorized_keys]
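With the key installed and chowned, the node accepts ordinary ssh logins; a minimal sketch using the key path and docker user from this log (the host-side port, 35733 below, is whatever docker assigned to 22/tcp):

  ssh -p 35733 \
    -i /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 \
    docker@127.0.0.1 hostname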
	I1217 10:31:38.910033 2962598 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:31:38.929491 2962598 machine.go:94] provisionDockerMachine start ...
	I1217 10:31:38.929580 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:31:38.955585 2962598 main.go:143] libmachine: Using SSH client type: native
	I1217 10:31:38.955707 2962598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:31:38.955713 2962598 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:31:38.957049 2962598 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56616->127.0.0.1:35733: read: connection reset by peer
	I1217 10:31:42.097414 2962598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:31:42.097431 2962598 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:31:42.097521 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:31:42.129452 2962598 main.go:143] libmachine: Using SSH client type: native
	I1217 10:31:42.129563 2962598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:31:42.129571 2962598 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:31:42.288599 2962598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:31:42.288685 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:31:42.308520 2962598 main.go:143] libmachine: Using SSH client type: native
	I1217 10:31:42.308637 2962598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:31:42.308651 2962598 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:31:42.444695 2962598 main.go:143] libmachine: SSH cmd err, output: <nil>: 
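The script above follows the Debian convention of keeping the machine's own name on the 127.0.1.1 line of /etc/hosts. A quick check that the edit took effect (names from this log):

  # Should print the "127.0.1.1 functional-232588" entry written above:
  docker exec functional-232588 grep 127.0.1.1 /etc/hosts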
	I1217 10:31:42.444712 2962598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:31:42.444739 2962598 ubuntu.go:190] setting up certificates
	I1217 10:31:42.444751 2962598 provision.go:84] configureAuth start
	I1217 10:31:42.444811 2962598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:31:42.460988 2962598 provision.go:143] copyHostCerts
	I1217 10:31:42.461050 2962598 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:31:42.461058 2962598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:31:42.461137 2962598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:31:42.461232 2962598 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:31:42.461236 2962598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:31:42.461263 2962598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:31:42.461362 2962598 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:31:42.461365 2962598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:31:42.461389 2962598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:31:42.461435 2962598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
	I1217 10:31:42.655813 2962598 provision.go:177] copyRemoteCerts
	I1217 10:31:42.655884 2962598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:31:42.655927 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:31:42.673116 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:31:42.768171 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:31:42.785809 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:31:42.805122 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 10:31:42.823296 2962598 provision.go:87] duration metric: took 378.520339ms to configureAuth
	I1217 10:31:42.823314 2962598 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:31:42.823506 2962598 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:31:42.823513 2962598 machine.go:97] duration metric: took 3.894012436s to provisionDockerMachine
	I1217 10:31:42.823519 2962598 client.go:176] duration metric: took 9.002859063s to LocalClient.Create
	I1217 10:31:42.823544 2962598 start.go:167] duration metric: took 9.002908243s to libmachine.API.Create "functional-232588"
	I1217 10:31:42.823559 2962598 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:31:42.823568 2962598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:31:42.823618 2962598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:31:42.823655 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:31:42.844318 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:31:42.940976 2962598 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:31:42.944626 2962598 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:31:42.944645 2962598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:31:42.944655 2962598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:31:42.944710 2962598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:31:42.944794 2962598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:31:42.944871 2962598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:31:42.944920 2962598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:31:42.952861 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:31:42.970783 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:31:42.988219 2962598 start.go:296] duration metric: took 164.646335ms for postStartSetup
	I1217 10:31:42.988683 2962598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:31:43.007531 2962598 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:31:43.007841 2962598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:31:43.007897 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:31:43.032717 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:31:43.125996 2962598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:31:43.131195 2962598 start.go:128] duration metric: took 9.31441827s to createHost
	I1217 10:31:43.131210 2962598 start.go:83] releasing machines lock for "functional-232588", held for 9.314533254s
	I1217 10:31:43.131284 2962598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:31:43.155749 2962598 out.go:179] * Found network options:
	I1217 10:31:43.158742 2962598 out.go:179]   - HTTP_PROXY=localhost:36263
	W1217 10:31:43.161595 2962598 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1217 10:31:43.164541 2962598 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1217 10:31:43.167461 2962598 ssh_runner.go:195] Run: cat /version.json
	I1217 10:31:43.167508 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:31:43.167547 2962598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:31:43.167597 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:31:43.186927 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:31:43.198055 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:31:43.280069 2962598 ssh_runner.go:195] Run: systemctl --version
	I1217 10:31:43.369873 2962598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 10:31:43.374434 2962598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:31:43.374494 2962598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:31:43.400564 2962598 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 10:31:43.400577 2962598 start.go:496] detecting cgroup driver to use...
	I1217 10:31:43.400608 2962598 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 10:31:43.400664 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:31:43.415667 2962598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:31:43.428524 2962598 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:31:43.428575 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:31:43.446194 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:31:43.464208 2962598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:31:43.576947 2962598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:31:43.692216 2962598 docker.go:234] disabling docker service ...
	I1217 10:31:43.692270 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:31:43.716018 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:31:43.729492 2962598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:31:43.847992 2962598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:31:43.965222 2962598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 10:31:43.978232 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:31:43.993695 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:31:44.003710 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:31:44.014438 2962598 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:31:44.014497 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:31:44.024409 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:31:44.033708 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:31:44.042666 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:31:44.052046 2962598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:31:44.060635 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:31:44.069867 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:31:44.078580 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 10:31:44.087841 2962598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:31:44.095695 2962598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:31:44.103761 2962598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:31:44.210981 2962598 ssh_runner.go:195] Run: sudo systemctl restart containerd
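The sed edits above pin the pause image, force the cgroupfs driver, and re-enable unprivileged ports before containerd is restarted. A sketch for reading back the rewritten values (file path and keys from this log):

  docker exec functional-232588 \
    grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml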
	I1217 10:31:44.337843 2962598 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:31:44.337903 2962598 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:31:44.341763 2962598 start.go:564] Will wait 60s for crictl version
	I1217 10:31:44.341817 2962598 ssh_runner.go:195] Run: which crictl
	I1217 10:31:44.345498 2962598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:31:44.376350 2962598 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:31:44.376443 2962598 ssh_runner.go:195] Run: containerd --version
	I1217 10:31:44.397410 2962598 ssh_runner.go:195] Run: containerd --version
	I1217 10:31:44.424578 2962598 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 10:31:44.427566 2962598 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:31:44.443543 2962598 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:31:44.447409 2962598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
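The { grep -v ...; echo ...; } > /tmp/h.$$ idiom above exists because shell redirection runs in the calling, unprivileged shell, so redirecting straight into /etc/hosts would fail even under sudo; the temp file is then copied into place with sudo cp. The common one-liner alternative is tee:

  # Equivalent effect, assuming the entry is not already present:
  echo '192.168.49.1	host.minikube.internal' | sudo tee -a /etc/hosts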
	I1217 10:31:44.456948 2962598 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:31:44.457049 2962598 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:31:44.457117 2962598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:31:44.482983 2962598 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:31:44.482995 2962598 containerd.go:534] Images already preloaded, skipping extraction
	I1217 10:31:44.483051 2962598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:31:44.511600 2962598 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:31:44.511611 2962598 cache_images.go:86] Images are preloaded, skipping loading
	I1217 10:31:44.511618 2962598 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:31:44.511712 2962598 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 10:31:44.511775 2962598 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:31:44.537024 2962598 cni.go:84] Creating CNI manager for ""
	I1217 10:31:44.537034 2962598 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:31:44.537046 2962598 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:31:44.537066 2962598 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:31:44.537172 2962598 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
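Before this file is handed to kubeadm init, it can be sanity-checked without touching the node; a sketch, assuming the kubeadm binary minikube staged (path from this log) and a kubeadm version that supports --dry-run:

  sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml --dry-run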
	I1217 10:31:44.537239 2962598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:31:44.545112 2962598 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 10:31:44.545172 2962598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:31:44.553167 2962598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:31:44.567465 2962598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:31:44.580630 2962598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 10:31:44.593068 2962598 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:31:44.596992 2962598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 10:31:44.606592 2962598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:31:44.727401 2962598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:31:44.743637 2962598 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:31:44.743648 2962598 certs.go:195] generating shared ca certs ...
	I1217 10:31:44.743663 2962598 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:31:44.743810 2962598 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:31:44.743866 2962598 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:31:44.743873 2962598 certs.go:257] generating profile certs ...
	I1217 10:31:44.743966 2962598 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:31:44.743976 2962598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt with IP's: []
	I1217 10:31:45.239331 2962598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt ...
	I1217 10:31:45.239352 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: {Name:mke8d0aba80eff817d699ddf08fa998e09130a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:31:45.239767 2962598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key ...
	I1217 10:31:45.239780 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key: {Name:mk92276b55e08255e34c5bb60ae0a6286e9cc7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:31:45.240108 2962598 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:31:45.240144 2962598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt.a39919a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 10:31:45.432555 2962598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt.a39919a0 ...
	I1217 10:31:45.432571 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt.a39919a0: {Name:mk69b41df65ca444e27b1eaeeeb71b80be470429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:31:45.432778 2962598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0 ...
	I1217 10:31:45.432787 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0: {Name:mk4bf5862570a3598af337baf828881e83fdf726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:31:45.432873 2962598 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt.a39919a0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt
	I1217 10:31:45.432957 2962598 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key
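The apiserver cert assembled above carries the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2). They can be read back with openssl (cert path from this log):

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt \
    | grep -A1 'Subject Alternative Name'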
	I1217 10:31:45.433010 2962598 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:31:45.433022 2962598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt with IP's: []
	I1217 10:31:45.576205 2962598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt ...
	I1217 10:31:45.576222 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt: {Name:mkb30de1a52b11f1f4c1e2a381dead0eecaeb6ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:31:45.576429 2962598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key ...
	I1217 10:31:45.576438 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key: {Name:mkb9459de3f2240513b863aae0f32feb70e79e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:31:45.576655 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:31:45.576697 2962598 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:31:45.576706 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:31:45.576731 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:31:45.576757 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:31:45.576779 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:31:45.576827 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:31:45.577388 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:31:45.597721 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:31:45.616717 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:31:45.634970 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:31:45.654423 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:31:45.672383 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:31:45.691724 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:31:45.710414 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:31:45.728226 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:31:45.746586 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:31:45.764868 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:31:45.783193 2962598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 10:31:45.796026 2962598 ssh_runner.go:195] Run: openssl version
	I1217 10:31:45.803008 2962598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:31:45.810596 2962598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:31:45.818301 2962598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:31:45.822066 2962598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:31:45.822122 2962598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:31:45.874917 2962598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 10:31:45.884364 2962598 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 10:31:45.892610 2962598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:31:45.900194 2962598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:31:45.907971 2962598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:31:45.911619 2962598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:31:45.911677 2962598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:31:45.952616 2962598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:31:45.960252 2962598 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
	I1217 10:31:45.967591 2962598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:31:45.975134 2962598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:31:45.982998 2962598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:31:45.986896 2962598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:31:45.986957 2962598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:31:46.028227 2962598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 10:31:46.035709 2962598 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
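The b5213941.0, 51391683.0, and 3ec20f2e.0 links created above follow the OpenSSL subject-hash convention (the same scheme c_rehash uses), so TLS verification can locate a CA by its hashed filename. A sketch of how one of them is derived (paths from this log; the hash for minikubeCA.pem is b5213941 per the log):

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"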
	I1217 10:31:46.043338 2962598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:31:46.046957 2962598 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 10:31:46.046999 2962598 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:31:46.047067 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:31:46.047134 2962598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:31:46.072899 2962598 cri.go:89] found id: ""
	I1217 10:31:46.072960 2962598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:31:46.080716 2962598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 10:31:46.088669 2962598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:31:46.088725 2962598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:31:46.096738 2962598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:31:46.096759 2962598 kubeadm.go:158] found existing configuration files:
	
	I1217 10:31:46.096826 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:31:46.104765 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:31:46.104842 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:31:46.112659 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:31:46.120614 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:31:46.120669 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:31:46.128110 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:31:46.136107 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:31:46.136162 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:31:46.143905 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:31:46.151763 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:31:46.151833 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
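The eight commands above are one pattern applied to four files: grep each kubeconfig for the expected control-plane endpoint and remove any file that does not reference it (here all four files are simply absent, so every grep fails and every rm is a no-op). A compact equivalent of the cleanup, as a sketch; minikube issues the commands one at a time:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8441 \
        /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
    done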
	I1217 10:31:46.159373 2962598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:31:46.293225 2962598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:31:46.293650 2962598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:31:46.363441 2962598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:35:50.364017 2962598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 10:35:50.364050 2962598 kubeadm.go:319] 
	I1217 10:35:50.364358 2962598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
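The failure reduces to a single symptom: the kubelet never answered its local health endpoint, so kubeadm's wait-control-plane phase gave up after four minutes. The probe, and the follow-up checks the error text itself suggests, can be run directly on the node:

    # The probe kubeadm performs; "connection refused" means nothing is
    # listening on 10248, i.e. the kubelet crashed or never started.
    curl -sSL http://127.0.0.1:10248/healthz
    # Why it is not running:
    systemctl status kubelet
    journalctl -xeu kubelet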
	I1217 10:35:50.372588 2962598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:35:50.372641 2962598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:35:50.372732 2962598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:35:50.372789 2962598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:35:50.372824 2962598 kubeadm.go:319] OS: Linux
	I1217 10:35:50.372870 2962598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:35:50.372922 2962598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:35:50.372970 2962598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:35:50.373022 2962598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:35:50.373069 2962598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:35:50.373118 2962598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:35:50.373164 2962598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:35:50.373217 2962598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:35:50.373260 2962598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:35:50.373327 2962598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:35:50.373447 2962598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:35:50.373557 2962598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:35:50.373619 2962598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:35:50.376819 2962598 out.go:252]   - Generating certificates and keys ...
	I1217 10:35:50.376914 2962598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:35:50.376982 2962598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:35:50.377075 2962598 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 10:35:50.377142 2962598 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 10:35:50.377198 2962598 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 10:35:50.377258 2962598 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 10:35:50.377317 2962598 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 10:35:50.377441 2962598 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 10:35:50.377494 2962598 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 10:35:50.377620 2962598 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 10:35:50.377684 2962598 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 10:35:50.377752 2962598 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 10:35:50.377794 2962598 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 10:35:50.377869 2962598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:35:50.377930 2962598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:35:50.377984 2962598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:35:50.378036 2962598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:35:50.378115 2962598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:35:50.378169 2962598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:35:50.378248 2962598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:35:50.378326 2962598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:35:50.381334 2962598 out.go:252]   - Booting up control plane ...
	I1217 10:35:50.381432 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:35:50.381509 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:35:50.381575 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:35:50.381691 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:35:50.381787 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:35:50.381914 2962598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:35:50.382023 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:35:50.382063 2962598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:35:50.382206 2962598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:35:50.382313 2962598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:35:50.382382 2962598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000105065s
	I1217 10:35:50.382385 2962598 kubeadm.go:319] 
	I1217 10:35:50.382440 2962598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:35:50.382471 2962598 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:35:50.382590 2962598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:35:50.382593 2962598 kubeadm.go:319] 
	I1217 10:35:50.382696 2962598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:35:50.382733 2962598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:35:50.382761 2962598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:35:50.382784 2962598 kubeadm.go:319] 
	W1217 10:35:50.382920 2962598 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000105065s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
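Given the cgroups v1 deprecation warning in the stderr above, one remedy the warning itself describes is to opt the kubelet back into cgroup v1 support. A hedged sketch, assuming the YAML spelling of the option is failCgroupV1 (the warning gives only the Go-style name FailCgroupV1), and noting that a rerun of kubeadm init rewrites this file, so in practice the setting would have to go through minikube's kubelet configuration patching:

    # Assumed field name; verify against the KubeletConfiguration reference
    # for the kubelet version in use before relying on this.
    printf 'failCgroupV1: false\n' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet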
	
	I1217 10:35:50.383008 2962598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 10:35:50.790986 2962598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
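Between attempts, minikube wipes the failed state with kubeadm reset and then checks whether the kubelet unit is still active before retrying init. Done by hand (the log invokes is-active with an extra "service" token; the sketch uses the plain unit name):

    sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force
    # Exit status 0 only while the unit is active; used here as a gate.
    sudo systemctl is-active --quiet kubelet && echo kubelet-still-active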
	I1217 10:35:50.804781 2962598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:35:50.804833 2962598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:35:50.812969 2962598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:35:50.812978 2962598 kubeadm.go:158] found existing configuration files:
	
	I1217 10:35:50.813029 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:35:50.821164 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:35:50.821221 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:35:50.829116 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:35:50.837654 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:35:50.837712 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:35:50.846052 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:35:50.854274 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:35:50.854337 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:35:50.862425 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:35:50.870630 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:35:50.870689 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 10:35:50.878320 2962598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:35:50.918242 2962598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:35:50.918310 2962598 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:35:50.994518 2962598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:35:50.994583 2962598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:35:50.994617 2962598 kubeadm.go:319] OS: Linux
	I1217 10:35:50.994661 2962598 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:35:50.994719 2962598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:35:50.994765 2962598 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:35:50.994813 2962598 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:35:50.994894 2962598 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:35:50.994961 2962598 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:35:50.995007 2962598 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:35:50.995069 2962598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:35:50.995122 2962598 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:35:51.065555 2962598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:35:51.065659 2962598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:35:51.065748 2962598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:35:51.072939 2962598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:35:51.078279 2962598 out.go:252]   - Generating certificates and keys ...
	I1217 10:35:51.078360 2962598 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:35:51.078423 2962598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:35:51.078508 2962598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 10:35:51.078569 2962598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 10:35:51.078638 2962598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 10:35:51.078691 2962598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 10:35:51.078758 2962598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 10:35:51.078818 2962598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 10:35:51.078892 2962598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 10:35:51.078969 2962598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 10:35:51.079005 2962598 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 10:35:51.079059 2962598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:35:51.417731 2962598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:35:51.963655 2962598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:35:52.437400 2962598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:35:52.623651 2962598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:35:52.761132 2962598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:35:52.761895 2962598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:35:52.765413 2962598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:35:52.768580 2962598 out.go:252]   - Booting up control plane ...
	I1217 10:35:52.768684 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:35:52.768762 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:35:52.770868 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:35:52.791571 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:35:52.791672 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:35:52.799060 2962598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:35:52.799388 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:35:52.799565 2962598 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:35:52.940908 2962598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:35:52.941043 2962598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:39:52.935423 2962598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001251054s
	I1217 10:39:52.935448 2962598 kubeadm.go:319] 
	I1217 10:39:52.935520 2962598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:39:52.935575 2962598 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:39:52.935686 2962598 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:39:52.935691 2962598 kubeadm.go:319] 
	I1217 10:39:52.935794 2962598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:39:52.935833 2962598 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:39:52.935864 2962598 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:39:52.935867 2962598 kubeadm.go:319] 
	I1217 10:39:52.939982 2962598 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:39:52.940396 2962598 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:39:52.940572 2962598 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:39:52.940861 2962598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 10:39:52.940868 2962598 kubeadm.go:319] 
	I1217 10:39:52.940948 2962598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 10:39:52.941009 2962598 kubeadm.go:403] duration metric: took 8m6.894014029s to StartCluster
	I1217 10:39:52.941055 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:39:52.941117 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:39:52.966691 2962598 cri.go:89] found id: ""
	I1217 10:39:52.966706 2962598 logs.go:282] 0 containers: []
	W1217 10:39:52.966713 2962598 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:39:52.966720 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:39:52.966782 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:39:52.991119 2962598 cri.go:89] found id: ""
	I1217 10:39:52.991133 2962598 logs.go:282] 0 containers: []
	W1217 10:39:52.991140 2962598 logs.go:284] No container was found matching "etcd"
	I1217 10:39:52.991145 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:39:52.991203 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:39:53.022458 2962598 cri.go:89] found id: ""
	I1217 10:39:53.022473 2962598 logs.go:282] 0 containers: []
	W1217 10:39:53.022480 2962598 logs.go:284] No container was found matching "coredns"
	I1217 10:39:53.022486 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:39:53.022549 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:39:53.056616 2962598 cri.go:89] found id: ""
	I1217 10:39:53.056631 2962598 logs.go:282] 0 containers: []
	W1217 10:39:53.056639 2962598 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:39:53.056644 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:39:53.056716 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:39:53.097546 2962598 cri.go:89] found id: ""
	I1217 10:39:53.097560 2962598 logs.go:282] 0 containers: []
	W1217 10:39:53.097568 2962598 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:39:53.097573 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:39:53.097633 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:39:53.123414 2962598 cri.go:89] found id: ""
	I1217 10:39:53.123436 2962598 logs.go:282] 0 containers: []
	W1217 10:39:53.123444 2962598 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:39:53.123450 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:39:53.123515 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:39:53.148817 2962598 cri.go:89] found id: ""
	I1217 10:39:53.148832 2962598 logs.go:282] 0 containers: []
	W1217 10:39:53.148840 2962598 logs.go:284] No container was found matching "kindnet"
	I1217 10:39:53.148855 2962598 logs.go:123] Gathering logs for kubelet ...
	I1217 10:39:53.148865 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:39:53.206472 2962598 logs.go:123] Gathering logs for dmesg ...
	I1217 10:39:53.206491 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:39:53.223677 2962598 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:39:53.223697 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:39:53.291954 2962598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:39:53.282996    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:53.283561    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:53.285255    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:53.285948    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:53.287682    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:39:53.282996    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:53.283561    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:53.285255    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:53.285948    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:53.287682    4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:39:53.291964 2962598 logs.go:123] Gathering logs for containerd ...
	I1217 10:39:53.291975 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:39:53.330138 2962598 logs.go:123] Gathering logs for container status ...
	I1217 10:39:53.330156 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
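With both init attempts dead, minikube falls back to collecting diagnostics: the kubelet and containerd journals, filtered dmesg, a node description, and container status, which become the sections further below. The same bundle can be gathered manually on the node, using the commands from the log:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a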
	W1217 10:39:53.360079 2962598 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001251054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 10:39:53.360136 2962598 out.go:285] * 
	W1217 10:39:53.360194 2962598 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001251054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:39:53.360205 2962598 out.go:285] * 
	W1217 10:39:53.362460 2962598 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:39:53.368847 2962598 out.go:203] 
	W1217 10:39:53.371684 2962598 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001251054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:39:53.371720 2962598 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 10:39:53.371746 2962598 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
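Acting on that suggestion means retrying the start with the kubelet pinned to the systemd cgroup driver. A sketch of the rerun under this profile's settings (profile name, driver, runtime, and version taken from this report; whether it clears the health-check timeout depends on what the kubelet journal shows):

    minikube start -p functional-232588 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd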
	I1217 10:39:53.374892 2962598 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271707202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271718304Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271762020Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271776509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271789547Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271802814Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271811643Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271837103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271855728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271884847Z" level=info msg="Connect containerd service"
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.273794703Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.274382316Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.291778081Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.291856085Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.291882571Z" level=info msg="Start subscribing containerd event"
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.291930890Z" level=info msg="Start recovering state"
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334179068Z" level=info msg="Start event monitor"
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334377053Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334443136Z" level=info msg="Start streaming server"
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334507979Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334565315Z" level=info msg="runtime interface starting up..."
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334616768Z" level=info msg="starting plugins..."
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334676648Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 10:31:44 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.336482711Z" level=info msg="containerd successfully booted in 0.084940s"
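containerd itself boots cleanly in under a tenth of a second; the only error in its journal is the expected complaint that no CNI config exists yet, since network addons are normally installed after the control plane is up. That precondition is easy to confirm:

    # Empty or missing is normal before a CNI addon is deployed, but it
    # keeps the node NotReady until one lands here.
    ls -la /etc/cni/net.d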
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:39:54.352937    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:54.353456    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:54.355207    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:54.355585    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:39:54.357076    4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
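"connection refused" on [::1]:8441 means nothing is listening on the apiserver port, not that the apiserver rejected the request. Two quick probes that distinguish the two cases, assuming curl and ss are available on the node:

    # A 200 from /readyz means the apiserver is up; "connection refused" means
    # the port is closed (consistent with the kubelet crash loop below).
    curl -k https://localhost:8441/readyz
    # Check whether any process is bound to the port at all:
    sudo ss -tlnp | grep 8441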
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:39:54 up 16:22,  0 user,  load average: 0.10, 0.46, 1.07
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 10:39:50 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:39:51 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 17 10:39:51 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:39:51 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:39:51 functional-232588 kubelet[4704]: E1217 10:39:51.544857    4704 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:39:51 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:39:51 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:39:52 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 17 10:39:52 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:39:52 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:39:52 functional-232588 kubelet[4710]: E1217 10:39:52.299503    4710 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:39:52 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:39:52 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:39:52 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 10:39:52 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:39:53 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:39:53 functional-232588 kubelet[4738]: E1217 10:39:53.083365    4738 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 10:39:53 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:39:53 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:39:53 functional-232588 kubelet[4816]: E1217 10:39:53.828903    4816 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
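The kubelet log above is the root cause of this failure: kubelet v1.35.0-rc.1 exits at startup because the node is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a loop (restart counter 318 through 321 in this window alone), and the apiserver behind port 8441 therefore never comes up. A quick way to confirm the host's cgroup mode, assuming a bash shell on the host or node:

    # "cgroup2fs" means the unified (v2) hierarchy; "tmpfs" means the legacy
    # v1 hierarchy, which this kubelet build rejects.
    stat -fc %T /sys/fs/cgroup/
    # Docker reports the same information:
    docker info --format '{{.CgroupVersion}}'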
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 6 (353.915593ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 10:39:54.829278 2968295 status.go:458] kubeconfig endpoint: get endpoint: "functional-232588" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (501.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (368.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1217 10:39:54.844248 2924574 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232588 --alsologtostderr -v=8
E1217 10:40:43.083094 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:41:10.790103 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:44:28.205326 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:45:43.083325 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:45:51.276058 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232588 --alsologtostderr -v=8: exit status 80 (6m5.270450571s)

-- stdout --
	* [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1217 10:39:54.887492 2968376 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:39:54.887669 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887679 2968376 out.go:374] Setting ErrFile to fd 2...
	I1217 10:39:54.887684 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887953 2968376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:39:54.888377 2968376 out.go:368] Setting JSON to false
	I1217 10:39:54.889321 2968376 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":58945,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:39:54.889394 2968376 start.go:143] virtualization:  
	I1217 10:39:54.892820 2968376 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:39:54.896642 2968376 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:39:54.896710 2968376 notify.go:221] Checking for updates...
	I1217 10:39:54.900325 2968376 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:39:54.903432 2968376 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:54.906306 2968376 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:39:54.909105 2968376 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:39:54.911889 2968376 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:39:54.915217 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:54.915331 2968376 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:39:54.937972 2968376 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:39:54.938091 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.000760 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:54.991784263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.000879 2968376 docker.go:319] overlay module found
	I1217 10:39:55.005745 2968376 out.go:179] * Using the docker driver based on existing profile
	I1217 10:39:55.010762 2968376 start.go:309] selected driver: docker
	I1217 10:39:55.010794 2968376 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.010914 2968376 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:39:55.011044 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.065164 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:55.056463493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.065569 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:55.065633 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:55.065694 2968376 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.070664 2968376 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:39:55.073373 2968376 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:39:55.076286 2968376 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:39:55.079282 2968376 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:39:55.079315 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:55.079350 2968376 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:39:55.079358 2968376 cache.go:65] Caching tarball of preloaded images
	I1217 10:39:55.079437 2968376 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:39:55.079447 2968376 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 10:39:55.079550 2968376 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:39:55.100219 2968376 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:39:55.100251 2968376 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 10:39:55.100265 2968376 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:39:55.100297 2968376 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:39:55.100355 2968376 start.go:364] duration metric: took 36.061µs to acquireMachinesLock for "functional-232588"
	I1217 10:39:55.100378 2968376 start.go:96] Skipping create...Using existing machine configuration
	I1217 10:39:55.100389 2968376 fix.go:54] fixHost starting: 
	I1217 10:39:55.100690 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:55.118322 2968376 fix.go:112] recreateIfNeeded on functional-232588: state=Running err=<nil>
	W1217 10:39:55.118352 2968376 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 10:39:55.121614 2968376 out.go:252] * Updating the running docker "functional-232588" container ...
	I1217 10:39:55.121666 2968376 machine.go:94] provisionDockerMachine start ...
	I1217 10:39:55.121762 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.140448 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.140568 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.140576 2968376 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:39:55.272992 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.273058 2968376 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:39:55.273155 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.294100 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.294200 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.294209 2968376 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:39:55.433566 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.433651 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.452012 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.452130 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.452152 2968376 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:39:55.584734 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 10:39:55.584801 2968376 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:39:55.584835 2968376 ubuntu.go:190] setting up certificates
	I1217 10:39:55.584846 2968376 provision.go:84] configureAuth start
	I1217 10:39:55.584917 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:55.602169 2968376 provision.go:143] copyHostCerts
	I1217 10:39:55.602226 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602261 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:39:55.602273 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602347 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:39:55.602482 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602507 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:39:55.602512 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602540 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:39:55.602588 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602609 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:39:55.602618 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602651 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:39:55.602701 2968376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
	I1217 10:39:55.859794 2968376 provision.go:177] copyRemoteCerts
	I1217 10:39:55.859877 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:39:55.859950 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.877144 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:55.974879 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 10:39:55.974962 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:39:55.992960 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 10:39:55.993024 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 10:39:56.017007 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 10:39:56.017075 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:39:56.039037 2968376 provision.go:87] duration metric: took 454.177473ms to configureAuth
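configureAuth regenerated the server certificate with the SANs listed in the provision.go:117 line above (127.0.0.1, 192.168.49.2, functional-232588, localhost, minikube); a TLS handshake fails if the dialed name is missing from that set. A sketch for inspecting the provisioned cert in place, assuming openssl is installed in the node image:

    # Print the Subject Alternative Name extension of the server cert
    # that was copied to /etc/docker/server.pem above.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'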
	I1217 10:39:56.039062 2968376 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:39:56.039248 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:56.039255 2968376 machine.go:97] duration metric: took 917.583269ms to provisionDockerMachine
	I1217 10:39:56.039263 2968376 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:39:56.039274 2968376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:39:56.039330 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:39:56.039374 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.064674 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.164379 2968376 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:39:56.167903 2968376 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 10:39:56.167924 2968376 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 10:39:56.167929 2968376 command_runner.go:130] > VERSION_ID="12"
	I1217 10:39:56.167934 2968376 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 10:39:56.167939 2968376 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 10:39:56.167943 2968376 command_runner.go:130] > ID=debian
	I1217 10:39:56.167947 2968376 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 10:39:56.167952 2968376 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 10:39:56.167958 2968376 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 10:39:56.168026 2968376 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:39:56.168043 2968376 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:39:56.168054 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:39:56.168116 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:39:56.168193 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:39:56.168199 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /etc/ssl/certs/29245742.pem
	I1217 10:39:56.168276 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:39:56.168280 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> /etc/test/nested/copy/2924574/hosts
	I1217 10:39:56.168325 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:39:56.175992 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:56.194065 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:39:56.211618 2968376 start.go:296] duration metric: took 172.340234ms for postStartSetup
	I1217 10:39:56.211696 2968376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:39:56.211740 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.229142 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.321408 2968376 command_runner.go:130] > 18%
	I1217 10:39:56.321497 2968376 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:39:56.325775 2968376 command_runner.go:130] > 160G
	I1217 10:39:56.326243 2968376 fix.go:56] duration metric: took 1.225850623s for fixHost
	I1217 10:39:56.326261 2968376 start.go:83] releasing machines lock for "functional-232588", held for 1.22589425s
	I1217 10:39:56.326382 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:56.351440 2968376 ssh_runner.go:195] Run: cat /version.json
	I1217 10:39:56.351467 2968376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:39:56.351509 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.351532 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.377953 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.378286 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.472298 2968376 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 10:39:56.558575 2968376 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 10:39:56.561329 2968376 ssh_runner.go:195] Run: systemctl --version
	I1217 10:39:56.567378 2968376 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 10:39:56.567418 2968376 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 10:39:56.567866 2968376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 10:39:56.572178 2968376 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 10:39:56.572242 2968376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:39:56.572327 2968376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:39:56.580077 2968376 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 10:39:56.580102 2968376 start.go:496] detecting cgroup driver to use...
	I1217 10:39:56.580153 2968376 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 10:39:56.580207 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:39:56.595473 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:39:56.608619 2968376 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:39:56.608683 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:39:56.624626 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:39:56.639198 2968376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:39:56.750544 2968376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:39:56.881240 2968376 docker.go:234] disabling docker service ...
	I1217 10:39:56.881321 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:39:56.896533 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:39:56.909686 2968376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:39:57.029179 2968376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:39:57.147650 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 10:39:57.160165 2968376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:39:57.172821 2968376 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 10:39:57.174291 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:39:57.183184 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:39:57.192049 2968376 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:39:57.192173 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:39:57.201301 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.210430 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:39:57.219288 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.228051 2968376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:39:57.235994 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:39:57.245724 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:39:57.254416 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
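minikube configures containerd by editing /etc/containerd/config.toml in place with the sed commands above (sandbox image, cgroup driver, CNI conf_dir) and then restarting the service, rather than templating a fresh file; because the host reported the "cgroupfs" driver, SystemdCgroup is forced to false. A quick way to verify what the edits left behind, assuming the same node shell:

    # The settings the sed commands above target:
    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml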
	I1217 10:39:57.263062 2968376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:39:57.269668 2968376 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 10:39:57.270584 2968376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:39:57.278345 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.386138 2968376 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 10:39:57.532674 2968376 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:39:57.532750 2968376 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:39:57.536608 2968376 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 10:39:57.536637 2968376 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 10:39:57.536644 2968376 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1217 10:39:57.536652 2968376 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:57.536659 2968376 command_runner.go:130] > Access: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536664 2968376 command_runner.go:130] > Modify: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536669 2968376 command_runner.go:130] > Change: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536673 2968376 command_runner.go:130] >  Birth: -
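start.go then waits up to 60s for the containerd socket before probing crictl. As a sketch of what "Will wait 60s for socket path" amounts to (not minikube's actual implementation, which checks the socket with stat as logged above):

    # Poll for the unix socket with a 60-second budget.
    for _ in $(seq 1 60); do
      test -S /run/containerd/containerd.sock && break
      sleep 1
    done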
	I1217 10:39:57.537168 2968376 start.go:564] Will wait 60s for crictl version
	I1217 10:39:57.537224 2968376 ssh_runner.go:195] Run: which crictl
	I1217 10:39:57.540827 2968376 command_runner.go:130] > /usr/local/bin/crictl
	I1217 10:39:57.541302 2968376 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:39:57.573267 2968376 command_runner.go:130] > Version:  0.1.0
	I1217 10:39:57.573463 2968376 command_runner.go:130] > RuntimeName:  containerd
	I1217 10:39:57.573480 2968376 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 10:39:57.573656 2968376 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 10:39:57.575908 2968376 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:39:57.575979 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.593702 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.595828 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.613025 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.620756 2968376 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 10:39:57.623690 2968376 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:39:57.639560 2968376 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:39:57.643332 2968376 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 10:39:57.643691 2968376 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:39:57.643808 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:57.643873 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.668138 2968376 command_runner.go:130] > {
	I1217 10:39:57.668155 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.668160 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668169 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.668174 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668179 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.668183 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668187 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668196 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.668199 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668204 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.668208 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668212 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668215 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668218 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668226 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.668231 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668236 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.668239 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668244 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668252 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.668260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668264 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.668267 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668271 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668274 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668284 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.668288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668293 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.668296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668303 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668311 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.668314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668319 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.668323 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.668327 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668330 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668333 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668340 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.668344 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668348 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.668351 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668355 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668363 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.668366 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668370 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.668375 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668379 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668382 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668386 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668390 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668393 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668396 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668405 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.668409 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668433 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.668438 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668442 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668450 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.668454 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668458 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.668461 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668470 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668478 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668482 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668485 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668489 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668492 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668498 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.668503 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.668512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668517 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668530 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.668537 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668542 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.668545 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668549 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668557 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668562 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668576 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668580 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668583 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668589 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.668593 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668598 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.668608 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668614 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668622 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.668629 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668633 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.668637 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668641 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668645 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668648 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668655 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.668662 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668668 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.668672 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668678 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668689 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.668692 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668696 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.668702 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668706 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668719 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668723 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668726 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668730 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668734 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668740 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.668748 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668753 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.668756 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668760 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668767 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.668773 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668777 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.668781 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668792 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.668799 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668803 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668807 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.668810 2968376 command_runner.go:130] >     }
	I1217 10:39:57.668813 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.668816 2968376 command_runner.go:130] > }
	I1217 10:39:57.671107 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.671128 2968376 containerd.go:534] Images already preloaded, skipping extraction
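	(The crictl dump above, and the identical re-check that follows, is plain JSON, so the preload verdict reduces to decoding a few fields per image. A minimal Go sketch, not minikube's actual code, assuming only the shape visible in the dump: images[].id/repoTags/repoDigests/size/pinned.)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields visible in the dump above; everything else is ignored.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // crictl emits size as a quoted string
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// The same command the log runs over SSH: sudo crictl images --output json
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}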
	I1217 10:39:57.671185 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.697059 2968376 command_runner.go:130] > {
	I1217 10:39:57.697078 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.697083 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697093 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.697108 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697114 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.697118 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697122 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697131 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.697142 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697147 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.697155 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697159 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697162 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697166 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697175 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.697180 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697185 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.697188 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697192 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697202 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.697205 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697209 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.697213 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697216 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697219 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697222 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697229 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.697233 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697238 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.697242 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697249 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697256 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.697260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697264 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.697268 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.697272 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697275 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697284 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.697288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697293 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.697296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697300 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697310 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.697314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697318 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.697323 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697327 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697330 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697334 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697338 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697341 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697344 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697350 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.697354 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697359 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.697363 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697366 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697374 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.697377 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697381 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.697384 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697393 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697396 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697400 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697403 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697406 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697409 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697416 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.697419 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697425 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.697428 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697432 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697440 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.697443 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697448 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.697460 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697464 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697467 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697470 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697474 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697477 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697480 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697486 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.697490 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697495 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.697498 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697501 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.697512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697515 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.697519 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697523 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697526 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697530 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697536 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.697540 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697545 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.697548 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697552 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697560 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.697563 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697567 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.697570 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697574 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697578 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697581 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697585 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697588 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697594 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697600 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.697604 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697609 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.697612 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697615 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697622 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.697626 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697630 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.697633 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697637 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.697641 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697645 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697649 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.697652 2968376 command_runner.go:130] >     }
	I1217 10:39:57.697655 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.697657 2968376 command_runner.go:130] > }
	I1217 10:39:57.699989 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.700059 2968376 cache_images.go:86] Images are preloaded, skipping loading
	I1217 10:39:57.700081 2968376 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:39:57.700225 2968376 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
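	(The kubelet unit fragment logged at kubeadm.go:947 above is what lands in the 326-byte systemd drop-in written a few lines below. A hedged sketch of rendering that ExecStart line with text/template; the kubeletOpts type and its field names are assumptions for illustration, while the flag set and values are taken verbatim from the log.)

package main

import (
	"os"
	"text/template"
)

// Illustrative only: this type and its fields are assumptions, not minikube's API.
type kubeletOpts struct {
	Version  string
	Hostname string
	NodeIP   string
}

// Drop-in body copied from the unit fragment logged above.
const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values from the log: v1.35.0-rc.1, functional-232588, 192.168.49.2.
	if err := t.Execute(os.Stdout, kubeletOpts{"v1.35.0-rc.1", "functional-232588", "192.168.49.2"}); err != nil {
		panic(err)
	}
}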
	I1217 10:39:57.700311 2968376 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:39:57.722782 2968376 command_runner.go:130] > {
	I1217 10:39:57.722800 2968376 command_runner.go:130] >   "cniconfig": {
	I1217 10:39:57.722805 2968376 command_runner.go:130] >     "Networks": [
	I1217 10:39:57.722813 2968376 command_runner.go:130] >       {
	I1217 10:39:57.722822 2968376 command_runner.go:130] >         "Config": {
	I1217 10:39:57.722827 2968376 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 10:39:57.722835 2968376 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 10:39:57.722839 2968376 command_runner.go:130] >           "Plugins": [
	I1217 10:39:57.722843 2968376 command_runner.go:130] >             {
	I1217 10:39:57.722847 2968376 command_runner.go:130] >               "Network": {
	I1217 10:39:57.722851 2968376 command_runner.go:130] >                 "ipam": {},
	I1217 10:39:57.722856 2968376 command_runner.go:130] >                 "type": "loopback"
	I1217 10:39:57.722860 2968376 command_runner.go:130] >               },
	I1217 10:39:57.722866 2968376 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 10:39:57.722869 2968376 command_runner.go:130] >             }
	I1217 10:39:57.722873 2968376 command_runner.go:130] >           ],
	I1217 10:39:57.722882 2968376 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 10:39:57.722886 2968376 command_runner.go:130] >         },
	I1217 10:39:57.722893 2968376 command_runner.go:130] >         "IFName": "lo"
	I1217 10:39:57.722896 2968376 command_runner.go:130] >       }
	I1217 10:39:57.722899 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722908 2968376 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 10:39:57.722912 2968376 command_runner.go:130] >     "PluginDirs": [
	I1217 10:39:57.722915 2968376 command_runner.go:130] >       "/opt/cni/bin"
	I1217 10:39:57.722919 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722923 2968376 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 10:39:57.722926 2968376 command_runner.go:130] >     "Prefix": "eth"
	I1217 10:39:57.722930 2968376 command_runner.go:130] >   },
	I1217 10:39:57.722933 2968376 command_runner.go:130] >   "config": {
	I1217 10:39:57.722936 2968376 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 10:39:57.722940 2968376 command_runner.go:130] >       "/etc/cdi",
	I1217 10:39:57.722944 2968376 command_runner.go:130] >       "/var/run/cdi"
	I1217 10:39:57.722948 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722952 2968376 command_runner.go:130] >     "cni": {
	I1217 10:39:57.722955 2968376 command_runner.go:130] >       "binDir": "",
	I1217 10:39:57.722959 2968376 command_runner.go:130] >       "binDirs": [
	I1217 10:39:57.722962 2968376 command_runner.go:130] >         "/opt/cni/bin"
	I1217 10:39:57.722965 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.722969 2968376 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 10:39:57.722973 2968376 command_runner.go:130] >       "confTemplate": "",
	I1217 10:39:57.722983 2968376 command_runner.go:130] >       "ipPref": "",
	I1217 10:39:57.722986 2968376 command_runner.go:130] >       "maxConfNum": 1,
	I1217 10:39:57.722991 2968376 command_runner.go:130] >       "setupSerially": false,
	I1217 10:39:57.722995 2968376 command_runner.go:130] >       "useInternalLoopback": false
	I1217 10:39:57.722998 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723004 2968376 command_runner.go:130] >     "containerd": {
	I1217 10:39:57.723008 2968376 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 10:39:57.723013 2968376 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 10:39:57.723017 2968376 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 10:39:57.723021 2968376 command_runner.go:130] >       "runtimes": {
	I1217 10:39:57.723024 2968376 command_runner.go:130] >         "runc": {
	I1217 10:39:57.723029 2968376 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 10:39:57.723033 2968376 command_runner.go:130] >           "PodAnnotations": null,
	I1217 10:39:57.723038 2968376 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 10:39:57.723046 2968376 command_runner.go:130] >           "cgroupWritable": false,
	I1217 10:39:57.723050 2968376 command_runner.go:130] >           "cniConfDir": "",
	I1217 10:39:57.723054 2968376 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 10:39:57.723058 2968376 command_runner.go:130] >           "io_type": "",
	I1217 10:39:57.723061 2968376 command_runner.go:130] >           "options": {
	I1217 10:39:57.723065 2968376 command_runner.go:130] >             "BinaryName": "",
	I1217 10:39:57.723069 2968376 command_runner.go:130] >             "CriuImagePath": "",
	I1217 10:39:57.723074 2968376 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 10:39:57.723077 2968376 command_runner.go:130] >             "IoGid": 0,
	I1217 10:39:57.723081 2968376 command_runner.go:130] >             "IoUid": 0,
	I1217 10:39:57.723085 2968376 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 10:39:57.723089 2968376 command_runner.go:130] >             "Root": "",
	I1217 10:39:57.723092 2968376 command_runner.go:130] >             "ShimCgroup": "",
	I1217 10:39:57.723096 2968376 command_runner.go:130] >             "SystemdCgroup": false
	I1217 10:39:57.723100 2968376 command_runner.go:130] >           },
	I1217 10:39:57.723105 2968376 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 10:39:57.723111 2968376 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 10:39:57.723115 2968376 command_runner.go:130] >           "runtimePath": "",
	I1217 10:39:57.723120 2968376 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 10:39:57.723124 2968376 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 10:39:57.723128 2968376 command_runner.go:130] >           "snapshotter": ""
	I1217 10:39:57.723131 2968376 command_runner.go:130] >         }
	I1217 10:39:57.723134 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723136 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723146 2968376 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 10:39:57.723151 2968376 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 10:39:57.723156 2968376 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 10:39:57.723161 2968376 command_runner.go:130] >     "disableApparmor": false,
	I1217 10:39:57.723166 2968376 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 10:39:57.723170 2968376 command_runner.go:130] >     "disableProcMount": false,
	I1217 10:39:57.723174 2968376 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 10:39:57.723177 2968376 command_runner.go:130] >     "enableCDI": true,
	I1217 10:39:57.723181 2968376 command_runner.go:130] >     "enableSelinux": false,
	I1217 10:39:57.723188 2968376 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 10:39:57.723195 2968376 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 10:39:57.723200 2968376 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 10:39:57.723204 2968376 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 10:39:57.723208 2968376 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 10:39:57.723212 2968376 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 10:39:57.723216 2968376 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 10:39:57.723222 2968376 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723226 2968376 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 10:39:57.723231 2968376 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723236 2968376 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 10:39:57.723241 2968376 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 10:39:57.723243 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723247 2968376 command_runner.go:130] >   "features": {
	I1217 10:39:57.723251 2968376 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 10:39:57.723254 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723257 2968376 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 10:39:57.723267 2968376 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723277 2968376 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723281 2968376 command_runner.go:130] >   "runtimeHandlers": [
	I1217 10:39:57.723283 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723287 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723291 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723297 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723299 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723302 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723305 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723308 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723315 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723319 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723322 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723326 2968376 command_runner.go:130] >       "name": "runc"
	I1217 10:39:57.723328 2968376 command_runner.go:130] >     }
	I1217 10:39:57.723335 2968376 command_runner.go:130] >   ],
	I1217 10:39:57.723338 2968376 command_runner.go:130] >   "status": {
	I1217 10:39:57.723342 2968376 command_runner.go:130] >     "conditions": [
	I1217 10:39:57.723345 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723348 2968376 command_runner.go:130] >         "message": "",
	I1217 10:39:57.723352 2968376 command_runner.go:130] >         "reason": "",
	I1217 10:39:57.723356 2968376 command_runner.go:130] >         "status": true,
	I1217 10:39:57.723361 2968376 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 10:39:57.723364 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723367 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723373 2968376 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 10:39:57.723378 2968376 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 10:39:57.723382 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723386 2968376 command_runner.go:130] >         "type": "NetworkReady"
	I1217 10:39:57.723389 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723391 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723414 2968376 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 10:39:57.723421 2968376 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 10:39:57.723426 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723432 2968376 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 10:39:57.723434 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723437 2968376 command_runner.go:130] >     ]
	I1217 10:39:57.723440 2968376 command_runner.go:130] >   }
	I1217 10:39:57.723442 2968376 command_runner.go:130] > }
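	(Within the crictl info payload above, the status.conditions array is the part that matters next: NetworkReady is false with reason NetworkPluginNotReady, which is exactly why a CNI, kindnet, is recommended in the lines that follow. A minimal sketch of extracting those conditions in Go; illustrative, not minikube's implementation.)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runtimeCondition struct {
	Type    string `json:"type"`
	Status  bool   `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

// Decodes only status.conditions out of the full crictl info document.
type crictlInfo struct {
	Status struct {
		Conditions []runtimeCondition `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		panic(err)
	}
	var info crictlInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		// On this node: RuntimeReady=true, NetworkReady=false (NetworkPluginNotReady).
		fmt.Printf("%s=%v reason=%q\n", c.Type, c.Status, c.Reason)
	}
}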
	I1217 10:39:57.726093 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:57.726119 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:57.726139 2968376 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:39:57.726166 2968376 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:39:57.726283 2968376 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
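	(The generated kubeadm config above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small standard-library-only Go sketch that splits the file on document separators and reports each kind; the path comes from the scp step a few lines below.)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path used by the kubeadm.yaml.new scp step below.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i, kind)
	}
}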
	I1217 10:39:57.726359 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:39:57.733320 2968376 command_runner.go:130] > kubeadm
	I1217 10:39:57.733342 2968376 command_runner.go:130] > kubectl
	I1217 10:39:57.733347 2968376 command_runner.go:130] > kubelet
	I1217 10:39:57.734253 2968376 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 10:39:57.734351 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:39:57.741900 2968376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:39:57.754718 2968376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:39:57.767131 2968376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 10:39:57.780328 2968376 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:39:57.783968 2968376 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
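	(The grep above confirms that control-plane.minikube.internal already maps to 192.168.49.2 in /etc/hosts, so nothing needs to be written before the kubelet restart. A sketch of the same check-and-append in Go; the append branch needs root and is purely illustrative.)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Host entry the log greps for before reloading kubelet.
	const host = "control-plane.minikube.internal"
	const entry = "192.168.49.2\t" + host
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(data), host) {
		fmt.Println("already present, nothing to do")
		return
	}
	// Missing: append the mapping (requires root).
	f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Fprintln(f, entry)
}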
	I1217 10:39:57.784263 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.891500 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:39:58.252332 2968376 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:39:58.252409 2968376 certs.go:195] generating shared ca certs ...
	I1217 10:39:58.252461 2968376 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.252670 2968376 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:39:58.252752 2968376 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:39:58.252788 2968376 certs.go:257] generating profile certs ...
	I1217 10:39:58.252943 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:39:58.253053 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:39:58.253133 2968376 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:39:58.253172 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 10:39:58.253214 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 10:39:58.253260 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 10:39:58.253294 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 10:39:58.253341 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 10:39:58.253377 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 10:39:58.253421 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 10:39:58.253456 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 10:39:58.253577 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:39:58.253658 2968376 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:39:58.253688 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:39:58.253756 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:39:58.253819 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:39:58.253883 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:39:58.253975 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:58.254044 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.254093 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem -> /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.254126 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.254782 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:39:58.276977 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:39:58.300224 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:39:58.319429 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:39:58.338203 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:39:58.355898 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:39:58.373473 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:39:58.391528 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:39:58.408858 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:39:58.426819 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:39:58.444926 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:39:58.462979 2968376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 10:39:58.476114 2968376 ssh_runner.go:195] Run: openssl version
	I1217 10:39:58.483093 2968376 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 10:39:58.483240 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.490661 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:39:58.498193 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502204 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502289 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502352 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.543361 2968376 command_runner.go:130] > b5213941
	I1217 10:39:58.543894 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 10:39:58.551548 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.559110 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:39:58.567064 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.570982 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571071 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571149 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.611772 2968376 command_runner.go:130] > 51391683
	I1217 10:39:58.612217 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:39:58.619901 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.627496 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:39:58.635170 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639161 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639286 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639343 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.679963 2968376 command_runner.go:130] > 3ec20f2e
	I1217 10:39:58.680491 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
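	(The three openssl/ln cycles above follow the standard OpenSSL CA layout: each PEM under /usr/share/ca-certificates also gets a <subject-hash>.0 symlink in /etc/ssl/certs, which is how TLS stacks look certificates up by subject hash. A sketch mirroring those exact commands, shelling out to openssl for the hash; installCACert is an assumed helper, not minikube's code.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the logged steps: compute the OpenSSL subject hash,
// then ensure /etc/ssl/certs/<hash>.0 points at the PEM file (ln -fs semantics).
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	// Path from the log; creating the symlink requires root.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}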
	I1217 10:39:58.687873 2968376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691452 2968376 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691483 2968376 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 10:39:58.691491 2968376 command_runner.go:130] > Device: 259,1	Inode: 3648630     Links: 1
	I1217 10:39:58.691498 2968376 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:58.691503 2968376 command_runner.go:130] > Access: 2025-12-17 10:35:51.067485305 +0000
	I1217 10:39:58.691508 2968376 command_runner.go:130] > Modify: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691513 2968376 command_runner.go:130] > Change: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691519 2968376 command_runner.go:130] >  Birth: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691792 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 10:39:58.732576 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.733078 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 10:39:58.773416 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.773947 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 10:39:58.814511 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.815058 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 10:39:58.855809 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.856437 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 10:39:58.897493 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.897637 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 10:39:58.937941 2968376 command_runner.go:130] > Certificate will not expire
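	(Each "Certificate will not expire" line above is an openssl x509 -checkend 86400 probe, i.e. "does this cert survive the next 24 hours?". The same check in pure Go with crypto/x509, as a self-contained sketch.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching openssl x509 -checkend <seconds>.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// 86400s = 24h, the window used by every -checkend probe in the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}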
	I1217 10:39:58.938362 2968376 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:58.938478 2968376 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:39:58.938558 2968376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:39:58.967095 2968376 cri.go:89] found id: ""
	I1217 10:39:58.967172 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:39:58.974207 2968376 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 10:39:58.974232 2968376 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 10:39:58.974239 2968376 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 10:39:58.975124 2968376 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 10:39:58.975142 2968376 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 10:39:58.975194 2968376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 10:39:58.982722 2968376 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:39:58.983159 2968376 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232588" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.983280 2968376 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232588" cluster setting kubeconfig missing "functional-232588" context setting]
	I1217 10:39:58.983551 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.984002 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.984156 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 10:39:58.984706 2968376 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 10:39:58.984730 2968376 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 10:39:58.984737 2968376 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 10:39:58.984745 2968376 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 10:39:58.984756 2968376 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 10:39:58.984794 2968376 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 10:39:58.985054 2968376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 10:39:58.992764 2968376 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 10:39:58.992810 2968376 kubeadm.go:602] duration metric: took 17.660629ms to restartPrimaryControlPlane
	I1217 10:39:58.992820 2968376 kubeadm.go:403] duration metric: took 54.467316ms to StartCluster
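	(The restart path above, rather than a fresh kubeadm init, was selected because the earlier sudo ls probe found kubeadm-flags.env, config.yaml, and an etcd data dir. A sketch of that existence probe; the three paths are the ones from the log.)

package main

import (
	"fmt"
	"os"
)

func main() {
	// The same three paths the log checks before choosing restart over init.
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	existing := 0
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			existing++
		}
	}
	if existing > 0 {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing configuration, full kubeadm init required")
	}
}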
	I1217 10:39:58.992834 2968376 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.992909 2968376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.993526 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.993746 2968376 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 10:39:58.994170 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:58.994219 2968376 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 10:39:58.994288 2968376 addons.go:70] Setting storage-provisioner=true in profile "functional-232588"
	I1217 10:39:58.994301 2968376 addons.go:239] Setting addon storage-provisioner=true in "functional-232588"
	I1217 10:39:58.994329 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:58.994354 2968376 addons.go:70] Setting default-storageclass=true in profile "functional-232588"
	I1217 10:39:58.994416 2968376 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232588"
	I1217 10:39:58.994775 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:58.994809 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.000060 2968376 out.go:179] * Verifying Kubernetes components...
	I1217 10:39:59.002988 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:59.030107 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:59.030278 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
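
The kapi.go dump above is the client-go rest.Config minikube assembles for the cluster: the host https://192.168.49.2:8441 plus the profile's client certificate, key, and CA. A minimal sketch that builds an equivalent config from the kubeconfig file named earlier in the log (standard k8s.io/client-go calls; the path is taken from the log and otherwise illustrative):

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig the log shows minikube updating.
        cfg, err := clientcmd.BuildConfigFromFlags(
            "", "/home/jenkins/minikube-integration/22182-2922712/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("host:", cfg.Host) // e.g. https://192.168.49.2:8441

        // A clientset built from this config issues the same kind of
        // requests that round_trippers logs below (GET /api/v1/nodes/...).
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }
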
	I1217 10:39:59.030548 2968376 addons.go:239] Setting addon default-storageclass=true in "functional-232588"
	I1217 10:39:59.030583 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:59.030999 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.046619 2968376 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 10:39:59.049547 2968376 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.049578 2968376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 10:39:59.049652 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.071122 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:59.078111 2968376 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 10:39:59.078138 2968376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 10:39:59.078204 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.106268 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
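
Both sshutil.go lines above dial the container's forwarded SSH port (127.0.0.1:35733) as user docker with the machine's ed25519 key, so the addon manifests can be copied in. A bare-bones equivalent using golang.org/x/crypto/ssh, with the key path taken from the log (a sketch, not minikube's sshutil implementation):

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Private key path as logged by sshutil.go above.
        pem, err := os.ReadFile("/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:35733", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        // client.NewSession() would then run commands or stream file
        // contents, which is how the "scp memory --> ..." lines work.
    }
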
	I1217 10:39:59.210035 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:39:59.247804 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.250104 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.029975 2968376 node_ready.go:35] waiting up to 6m0s for node "functional-232588" to be "Ready" ...
	I1217 10:40:00.030121 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.030183 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
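
Each readiness probe is logged as a Request/Response pair like the one above: a GET against /api/v1/nodes/functional-232588 with an Accept header that prefers protobuf. The same request can be reproduced with plain net/http; this sketch skips certificate verification purely to stay short, whereas the real client authenticates with the profile's certs:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify keeps the sketch self-contained; do not
        // use it outside throwaway test clusters.
        tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
        client := &http.Client{Transport: tr}

        req, err := http.NewRequest("GET",
            "https://192.168.49.2:8441/api/v1/nodes/functional-232588", nil)
        if err != nil {
            log.Fatal(err)
        }
        // Same Accept header the round_trippers log shows.
        req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")

        resp, err := client.Do(req)
        if err != nil {
            // While the apiserver is down this fails exactly like the log:
            // dial tcp 192.168.49.2:8441: connect: connection refused.
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }
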
	I1217 10:40:00.030443 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030485 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030522 2968376 retry.go:31] will retry after 293.620925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
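
retry.go responds to every failed apply by scheduling another attempt after a short, growing delay: 293ms here, then roughly doubling until it reaches multi-second waits further down the log. A compact standard-library version of that pattern (applyManifest is a hypothetical stand-in that fails the way kubectl does while the apiserver is unreachable):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // applyManifest stands in for the kubectl apply the log keeps retrying;
    // here it always fails, as it does while the apiserver is down.
    func applyManifest(path string) error {
        return errors.New("connect: connection refused")
    }

    func main() {
        delay := 300 * time.Millisecond
        for attempt := 1; attempt <= 8; attempt++ {
            err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml")
            if err == nil {
                return
            }
            fmt.Printf("attempt %d failed: %v; will retry after %v\n", attempt, err, delay)
            time.Sleep(delay)
            delay *= 2 // exponential backoff; real implementations cap and jitter this
        }
        fmt.Println("giving up")
    }
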
	I1217 10:40:00.030561 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030575 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030582 2968376 retry.go:31] will retry after 156.365506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030650 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:00.188354 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.324847 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.436532 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.436662 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.436836 2968376 retry.go:31] will retry after 279.814099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.516954 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.518501 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.518555 2968376 retry.go:31] will retry after 262.10287ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.531577 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.531724 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:00.533353 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 10:40:00.717812 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.781511 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.801403 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.801643 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.801671 2968376 retry.go:31] will retry after 799.844048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868602 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.868642 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868698 2968376 retry.go:31] will retry after 554.70169ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.031171 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.031268 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.031636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.424206 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:01.486829 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.486884 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.486903 2968376 retry.go:31] will retry after 534.910165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.531036 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.531190 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.531514 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.601938 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:01.666361 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.666415 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.666435 2968376 retry.go:31] will retry after 494.63938ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.022963 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:02.030812 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.030945 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.031372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:02.031439 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
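
node_ready.go polls the node about every 500ms and emits this warning each time the GET fails, continuing until the node reports Ready or the 6m budget from start.go expires. With client-go, the Ready check reduces to walking the node's status conditions, roughly as in this sketch (clientset setup uses the default kubeconfig path; all names illustrative):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node carries a Ready=True condition.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. connection refused while the apiserver restarts
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            ok, err := nodeReady(cs, "functional-232588")
            if err != nil {
                fmt.Println("will retry:", err)
            } else if ok {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for Ready")
    }
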
	I1217 10:40:02.093352 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.093469 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.093495 2968376 retry.go:31] will retry after 1.147395482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.161756 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:02.224785 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.224835 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.224873 2968376 retry.go:31] will retry after 722.380129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.530243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.530335 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.530682 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:02.948277 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:03.019220 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.023774 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.023820 2968376 retry.go:31] will retry after 1.527910453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.031105 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.031182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.031525 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:03.241898 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:03.304153 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.304205 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.304227 2968376 retry.go:31] will retry after 2.808262652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.530353 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.530425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.530767 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.030262 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.030340 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.030662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.530190 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.530267 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.530613 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:04.530682 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:04.552783 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:04.614277 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:04.618634 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:04.618671 2968376 retry.go:31] will retry after 1.686088172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:05.031243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.031319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.031611 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:05.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.530314 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.530636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.030216 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.030295 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.030584 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.113005 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:06.174987 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.175028 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.175048 2968376 retry.go:31] will retry after 2.620064864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.305352 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:06.366722 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.366771 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.366790 2968376 retry.go:31] will retry after 6.20410258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.531098 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.531170 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:06.531566 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:07.030285 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.030361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.030703 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:07.530195 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.530269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.530540 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.030245 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.030326 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.530335 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.530413 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.530732 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.796304 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:08.853426 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:08.857034 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:08.857067 2968376 retry.go:31] will retry after 3.174722269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:09.030586 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:09.030666 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:09.031008 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:09.031064 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:09.530804 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:09.530879 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:09.531204 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:10.031140 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:10.031218 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:10.031521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:10.530229 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:10.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:10.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:11.030272 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:11.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:11.030674 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:11.530355 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:11.530450 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:11.530745 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:11.530788 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:12.030183 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:12.030259 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:12.030568 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:12.032754 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:12.104534 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.104594 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.104617 2968376 retry.go:31] will retry after 7.427014064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.531116 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:12.531194 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:12.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:12.571824 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:12.627783 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.631439 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.631473 2968376 retry.go:31] will retry after 5.673499761s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:13.031007 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:13.031079 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:13.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:13.530133 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:13.530207 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:13.530473 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:14.030881 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:14.030963 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:14.031294 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:14.031348 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:14.531063 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:14.531139 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:14.531511 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:15.030415 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:15.030505 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:15.030865 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:15.530246 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:15.530327 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:15.530615 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:16.030257 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:16.030335 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:16.030683 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:16.530343 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:16.530412 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:16.530735 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:16.530792 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:17.030348 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:17.030438 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:17.030746 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:17.530427 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:17.530508 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:17.530854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:18.031138 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:18.031239 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:18.031524 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:18.306153 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:18.363523 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:18.367149 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:18.367184 2968376 retry.go:31] will retry after 11.676089788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:18.530483 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:18.530628 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:18.530998 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:18.531054 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:19.031060 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:19.031138 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:19.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:19.530144 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:19.530217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:19.530501 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:19.532780 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:19.596086 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:19.596134 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:19.596153 2968376 retry.go:31] will retry after 6.09625298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:20.031102 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:20.031251 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:20.031747 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:20.530743 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:20.530896 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:20.531474 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:20.531549 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:21.030954 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:21.031034 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:21.031324 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:21.531097 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:21.531170 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:21.531522 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:22.030145 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:22.030232 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:22.030617 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:22.530952 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:22.531023 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:22.531286 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:23.031049 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:23.031121 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:23.031488 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:23.031552 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:23.531151 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:23.531233 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:23.531594 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:24.030205 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:24.030271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:24.030618 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:24.530297 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:24.530374 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:24.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:25.030556 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:25.030634 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:25.031013 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:25.530898 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:25.530990 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:25.531308 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:25.531351 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:25.692701 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:25.761074 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:25.761116 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:25.761134 2968376 retry.go:31] will retry after 8.308022173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:26.030656 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:26.030736 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:26.031050 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:26.530816 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:26.530887 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:26.531227 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:27.030620 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:27.030689 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:27.030975 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:27.530810 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:27.530882 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:27.531225 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:28.031037 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:28.031121 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:28.031512 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:28.031588 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:28.530251 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:28.530319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:28.530583 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:29.030525 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:29.030614 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:29.031053 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:29.530775 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:29.530846 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:29.531189 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:30.032544 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:30.032629 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:30.032970 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:30.033031 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:30.044190 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:30.141158 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:30.141207 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:30.141228 2968376 retry.go:31] will retry after 21.251088353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
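
Every failure in this stretch is the same symptom seen from two directions: kubectl's schema validation cannot reach `localhost:8441` to download the OpenAPI document, and the readiness poll cannot reach `192.168.49.2:8441`, both failing with `connection refused`. That points at an apiserver that is not listening on port 8441 at all, not at anything wrong with the addon manifests. A quick way to confirm from the host, sketched as a plain TCP probe with the two addresses copied from the log:

    // Probe both endpoints from the log to see whether anything is
    // accepting TCP connections on the apiserver port. A diagnostic
    // sketch, not part of the test suite.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s: %v\n", addr, err) // e.g. "connect: connection refused"
                continue
            }
            conn.Close()
            fmt.Printf("%s: listening\n", addr)
        }
    }

If both dials are refused, the retries below cannot succeed regardless of backoff; the interesting evidence is whatever stopped kube-apiserver, not these apply failures.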
	I1217 10:40:30.530770 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:30.530848 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:30.531184 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:31.031023 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:31.031097 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:31.031429 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:31.530162 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:31.530338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:31.530687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:32.030318 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:32.030410 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:32.030863 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:32.530571 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:32.530648 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:32.531098 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:32.531174 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:33.030920 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:33.031010 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:33.031359 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:33.531147 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:33.531219 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:33.531570 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:34.030334 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:34.030418 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:34.030775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:34.070045 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:34.128651 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:34.132259 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:34.132293 2968376 retry.go:31] will retry after 23.004999937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:34.530392 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:34.530466 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:34.530735 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:35.030855 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:35.030938 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:35.031252 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:35.031308 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:35.530763 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:35.530834 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:35.531181 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:36.030980 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:36.031106 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:36.031458 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:36.530826 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:36.530905 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:36.531257 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:37.031180 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:37.031261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:37.031662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:37.031754 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:37.530206 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:37.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:37.530649 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:38.030423 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:38.030503 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:38.030854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:38.530587 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:38.530659 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:38.531005 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:39.030844 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:39.030924 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:39.031203 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:39.531010 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:39.531096 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:39.531446 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:39.531521 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:40.031145 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:40.031231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:40.031658 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:40.530343 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:40.530420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:40.530707 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:41.030992 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:41.031064 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:41.031409 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:41.531176 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:41.531252 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:41.531592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:41.531649 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:42.030335 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:42.030418 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:42.030713 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:42.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:42.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:42.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:43.030224 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:43.030309 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:43.030694 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:43.530392 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:43.530468 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:43.530795 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:44.030259 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:44.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:44.030666 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:44.030720 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:44.530388 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:44.530467 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:44.530803 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:45.030897 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:45.032736 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:45.034090 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 10:40:45.530857 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:45.530936 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:45.531262 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:46.031009 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:46.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:46.031343 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:46.031380 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:46.531073 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:46.531152 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:46.531521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:47.030170 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:47.030255 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:47.030602 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:47.530303 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:47.530374 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:47.530644 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:48.030323 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:48.030406 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:48.030744 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:48.530500 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:48.530605 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:48.530966 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:48.531023 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:49.030795 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:49.030871 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:49.031172 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:49.530860 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:49.530935 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:49.531267 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:50.031129 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:50.031208 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:50.031548 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:50.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:50.530303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:50.530574 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:51.030327 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:51.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:51.030749 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:51.030806 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:51.393321 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:51.454332 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:51.458316 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:51.458350 2968376 retry.go:31] will retry after 15.302727777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:51.530571 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:51.530643 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:51.530966 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:52.030247 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:52.030332 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:52.030623 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:52.530219 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:52.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:52.530691 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:53.030289 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:53.030364 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:53.030698 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:53.530380 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:53.530457 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:53.530780 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:53.530833 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:54.030549 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:54.030652 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:54.030947 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:54.530639 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:54.530716 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:54.531043 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:55.030934 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:55.031013 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:55.031455 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:55.531099 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:55.531193 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:55.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:55.531578 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:56.030320 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.030398 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.030700 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.530273 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.030303 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.030719 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.138000 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:57.193212 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:57.197444 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:57.197478 2968376 retry.go:31] will retry after 20.170499035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:57.530886 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.530963 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.531316 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:58.031030 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:58.031101 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:58.031459 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:58.031521 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:58.530185 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:58.530279 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:58.530591 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:59.030603 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:59.030673 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:59.031011 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:59.530181 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:59.530254 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:59.530556 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:00.031130 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:00.031217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:00.031532 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:00.031582 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:00.530232 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:00.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:00.530652 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:01.030118 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:01.030193 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:01.030459 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:01.530122 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:01.530201 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:01.530574 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:02.030319 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:02.030407 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:02.030755 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:02.530195 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:02.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:02.530558 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:02.530607 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:03.030232 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:03.030305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:03.030654 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:03.530352 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:03.530433 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:03.530775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:04.030460 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:04.030552 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:04.030847 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:04.530572 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:04.530659 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:04.530971 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:04.531027 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:05.031015 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:05.031091 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:05.031381 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:05.531137 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:05.531210 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:05.531480 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.030204 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:06.030287 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:06.030672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.530230 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:06.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:06.530661 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.762229 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:06.820073 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:06.820109 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:06.820128 2968376 retry.go:31] will retry after 35.040877283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
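	
	Note the retry.go delay: the next attempt is scheduled after a randomized interval (35.040877283s here) rather than a fixed one, so repeated failures don't hit the apiserver in lockstep. A generic jittered-retry helper in that spirit — illustrative only, not minikube's retry.go:
	
		// sketch_retry.go — retries fn with a randomized sleep in [0, max),
		// which is why the log shows odd delays like 35.040877283s.
		// Assumes max > 0.
		package sketch
	
		import (
			"fmt"
			"math/rand"
			"time"
		)
	
		func retryWithJitter(attempts int, max time.Duration, fn func() error) error {
			var err error
			for i := 0; i < attempts; i++ {
				if err = fn(); err == nil {
					return nil
				}
				d := time.Duration(rand.Int63n(int64(max)))
				fmt.Printf("will retry after %s: %v\n", d, err)
				time.Sleep(d)
			}
			return err
		}
	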
	I1217 10:41:07.030604 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:07.030693 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:07.030967 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:07.031017 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:07.530709 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:07.530791 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:07.531216 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:08.030859 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:08.030956 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:08.031280 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:08.531028 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:08.531110 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:08.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:09.030137 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:09.030210 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:09.030518 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:09.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:09.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:09.530639 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:09.530700 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:10.030448 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:10.030530 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:10.030820 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:10.530483 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:10.530577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:10.530870 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:11.030249 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:11.030346 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:11.030673 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:11.530364 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:11.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:11.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:11.530760 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:12.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:12.030329 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:12.030660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:12.530205 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:12.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:12.530608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:13.030312 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:13.030400 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:13.030672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:13.530341 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:13.530415 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:13.530818 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:13.530880 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:14.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:14.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:14.030678 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:14.530384 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:14.530453 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:14.530771 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:15.030787 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:15.030877 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:15.031291 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:15.531114 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:15.531196 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:15.531528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:15.531590 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:16.030220 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:16.030296 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:16.030573 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:16.530300 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:16.530383 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:16.530739 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:17.030458 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:17.030539 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:17.030882 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:17.368346 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:17.428304 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:17.431873 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:17.431904 2968376 retry.go:31] will retry after 38.363968078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
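	
	The apply fails before it is even attempted: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver (the /openapi/v2 GET in the error), and with nothing listening on 8441 that download is refused. --validate=false would only change the error message, since the subsequent apply would hit the same refused connection. Minikube drives this command through ssh_runner inside the node; a minimal local sketch of the same invocation pattern, using plain os/exec with the paths taken from the log — illustrative, not minikube's ssh_runner:
	
		// sketch_apply.go — runs kubectl the way the ssh_runner line does.
		package sketch
	
		import (
			"fmt"
			"os"
			"os/exec"
		)
	
		func applyAddon(kubectl, kubeconfig, manifest string) error {
			cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
			cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err != nil {
				// Mirrors the addons.go warning: surface stdout/stderr and
				// let the caller decide whether to retry.
				return fmt.Errorf("apply failed: %v\n%s", err, out)
			}
			return nil
		}
	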
	I1217 10:41:17.531154 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:17.531231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:17.531502 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:18.030234 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:18.030352 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:18.030774 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:18.030859 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:18.530515 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:18.530607 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:18.530942 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:19.030903 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:19.030980 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:19.031301 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:19.530780 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:19.530855 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:19.531233 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:20.031004 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:20.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:20.031456 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:20.031515 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:20.530158 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:20.530242 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:20.530554 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:21.030247 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:21.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:21.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:21.530378 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:21.530474 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:21.530782 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:22.030443 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:22.030541 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:22.030864 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:22.530263 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:22.530337 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:22.530672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:22.530725 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:23.030389 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:23.030466 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:23.030819 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:23.530513 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:23.530591 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:23.530877 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:24.030399 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:24.030471 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:24.030823 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:24.530238 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:24.530307 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:24.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:25.030797 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:25.030872 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:25.031158 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:25.031215 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:25.530943 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:25.531023 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:25.531343 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:26.031163 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:26.031243 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:26.031563 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:26.530221 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:26.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:26.530613 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:27.030270 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:27.030340 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:27.030646 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:27.530262 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:27.530344 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:27.530672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:27.530735 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:28.030374 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:28.030450 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:28.030789 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:28.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:28.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:28.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:29.030406 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:29.030498 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:29.030839 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:29.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:29.530270 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:29.530587 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:30.030598 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:30.030723 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:30.031102 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:30.031168 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:30.530947 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:30.531019 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:30.531339 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:31.031114 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:31.031177 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:31.031431 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:31.531219 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:31.531303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:31.531630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:32.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:32.030291 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:32.030641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:32.530182 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:32.530258 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:32.530539 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:32.530590 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:33.030266 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:33.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:33.030749 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:33.530421 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:33.530501 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:33.530853 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:34.030212 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:34.030287 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:34.030655 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:34.530281 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:34.530361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:34.530710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:34.530814 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:35.030611 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:35.030685 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:35.031034 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:35.530333 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:35.530402 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:35.530717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:36.030300 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:36.030388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:36.030750 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:36.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:36.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:36.530608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:37.030333 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:37.030420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:37.030757 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:37.030808 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:37.530515 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:37.530586 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:37.530947 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:38.030773 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:38.030848 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:38.031196 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:38.530440 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:38.530514 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:38.530796 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:39.030742 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:39.030829 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:39.031155 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:39.031225 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:39.530944 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:39.531015 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:39.531346 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:40.031118 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:40.031205 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:40.031497 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:40.530158 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:40.530248 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:40.530609 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:41.030336 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:41.030425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:41.030757 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:41.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:41.530274 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:41.530533 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:41.530571 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:41.862178 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:41.923706 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923759 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923872 2968376 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
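	
	Once the retry budget for an addon is spent, minikube does not abort startup: the failure is recorded as a warning via out.go and echoed once to the console (hence the unprefixed copy of the same message above), and the run continues with the addon left disabled. A sketch of how enable callbacks might be run and their failures aggregated into the bracketed list shown in the warning — illustrative, not minikube's addons.go:
	
		// sketch_callbacks.go — collects callback failures into one error
		// whose %v rendering is a bracketed list, like the log above.
		package sketch
	
		import "fmt"
	
		func runCallbacks(cbs []func() error) error {
			var errs []error
			for _, cb := range cbs {
				if err := cb(); err != nil {
					errs = append(errs, err)
				}
			}
			if len(errs) > 0 {
				return fmt.Errorf("running callbacks: %v", errs)
			}
			return nil
		}
	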
	I1217 10:41:42.031033 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:42.031113 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:42.031454 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:42.530179 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:42.530261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:42.530593 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:43.030183 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:43.030252 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:43.030559 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:43.530235 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:43.530318 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:43.530640 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:43.530702 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:44.030269 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:44.030350 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:44.030675 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:44.530279 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:44.530349 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:44.530638 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:45.030563 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:45.030653 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:45.031039 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:45.530936 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:45.531014 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:45.531379 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:45.531437 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:46.030694 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:46.030768 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:46.031031 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:46.530498 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:46.530586 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:46.530955 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:47.030756 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:47.030830 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:47.031181 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:47.530955 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:47.531021 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:47.531344 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:48.031149 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:48.031228 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:48.031596 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:48.031656 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:48.530229 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:48.530311 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:48.530650 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:49.030360 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:49.030435 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:49.030716 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:49.530373 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:49.530448 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:49.530795 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:50.030856 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:50.030945 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:50.031309 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:50.531043 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:50.531111 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:50.531380 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:50.531421 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:51.030173 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:51.030254 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:51.030586 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:51.530279 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:51.530362 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:51.530687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:52.030399 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:52.030478 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:52.030876 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:52.530578 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:52.530651 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:52.531025 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:53.030854 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:53.030938 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:53.031290 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:53.031364 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:53.531095 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:53.531182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:53.531545 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:54.030257 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:54.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:54.030711 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:54.530208 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:54.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:54.534934 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 10:41:55.030911 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:55.031006 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:55.031280 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:55.530658 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:55.530757 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:55.531092 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:55.531146 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:55.796501 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:55.858122 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858175 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858259 2968376 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 10:41:55.863014 2968376 out.go:179] * Enabled addons: 
	I1217 10:41:55.865747 2968376 addons.go:530] duration metric: took 1m56.871522842s for enable addons: enabled=[]
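The addon enable above fails because kubectl's client-side validation needs the apiserver's /openapi/v2 document, which is unreachable while the apiserver is down; minikube's addons.go retries the apply rather than passing --validate=false as the error message suggests, and here the retries were exhausted, leaving enabled=[]. A minimal sketch of that retry-the-apply approach, with a hypothetical attempt count and sleep (not minikube's actual backoff):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
        for attempt := 1; attempt <= 5; attempt++ {
            // --force matches the command logged above; validation stays on.
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                fmt.Println("applied:", manifest)
                return
            }
            // Typical failure while the apiserver is down (see log above):
            // "failed to download openapi: ... connection refused"
            fmt.Printf("attempt %d failed: %v\n%s\n", attempt, err, out)
            time.Sleep(2 * time.Second)
        }
        fmt.Println("giving up; addon left disabled")
    }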
	I1217 10:41:56.030483 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:56.030561 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:56.030907 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:56.530592 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:56.530668 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:56.530973 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:57.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:57.030336 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:57.030717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:57.530234 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:57.530308 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:57.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:58.033611 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:58.033711 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:58.033996 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:58.034053 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:58.530276 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:58.530381 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:58.530759 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:59.030773 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:59.030845 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:59.031207 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:59.531008 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:59.531115 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:59.531404 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:00.030325 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:00.030471 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:00.030856 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:00.530785 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:00.530901 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:00.531226 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:00.531288 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:01.030976 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:01.031043 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:01.031299 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:01.531109 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:01.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:01.531522 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:02.030231 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:02.030334 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:02.030695 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:02.530421 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:02.530490 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:02.530829 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:03.030527 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:03.030623 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:03.030985 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:03.031044 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:03.530805 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:03.530890 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:03.531241 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:04.030644 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:04.030719 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:04.031014 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:04.530743 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:04.530821 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:04.531126 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:05.030982 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:05.031061 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:05.031449 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:05.031509 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:05.530161 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:05.530231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:05.530503 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:06.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:06.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:06.030811 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:06.530502 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:06.530577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:06.530933 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:07.030646 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:07.030722 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:07.031021 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:07.530377 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:07.530455 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:07.530792 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:07.530847 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:08.030509 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:08.030589 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:08.030943 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:08.530625 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:08.530698 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:08.530961 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:09.030865 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:09.030938 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:09.031271 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:09.531064 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:09.531145 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:09.531546 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:09.531604 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:10.030184 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:10.030265 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:10.030604 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:10.530301 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:10.530388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:10.530737 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:11.030319 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:11.030395 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:11.030731 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:11.530208 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:11.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:11.530559 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:12.030254 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:12.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:12.030671 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:12.030728 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:12.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:12.530298 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:12.530650 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:13.030199 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:13.030289 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:13.030609 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:13.530174 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:13.530251 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:13.530593 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:14.030325 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:14.030403 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:14.030742 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:14.030815 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:14.530197 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:14.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:14.530595 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:15.030677 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:15.030769 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:15.031176 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:15.530953 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:15.531037 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:15.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:16.030632 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:16.030705 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:16.031041 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:16.031095 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:16.530824 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:16.530899 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:16.531227 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:17.031078 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:17.031158 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:17.031507 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:17.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:17.530279 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:17.530603 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:18.030263 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:18.030383 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:18.030909 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:18.530209 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:18.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:18.530632 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:18.530733 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:19.030297 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:19.030368 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:19.030628 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:19.530369 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:19.530456 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:19.530831 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:20.030743 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:20.030860 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:20.031293 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:20.531060 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:20.531143 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:20.531416 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:20.531464 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:21.030175 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:21.030250 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:21.030599 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:21.530297 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:21.530372 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:21.530710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:22.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:22.030288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:22.030595 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:22.530236 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:22.530323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:22.530665 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:23.030258 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:23.030337 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:23.030699 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:23.030756 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:23.530408 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:23.530481 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:23.530771 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:24.030482 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:24.030556 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:24.030886 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:24.530220 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:24.530300 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:24.530624 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:25.030607 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:25.030684 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:25.030955 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:25.030997 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:25.530792 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:25.530868 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:25.531224 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:26.031041 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:26.031118 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:26.031467 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:26.530812 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:26.530894 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:26.531163 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:27.030944 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:27.031023 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:27.031350 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:27.031410 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:27.531121 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:27.531199 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:27.531551 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:28.030233 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:28.030315 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:28.030645 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:28.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:28.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:28.530690 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:29.030400 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:29.030474 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:29.030789 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:29.530188 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:29.530261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:29.530523 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:29.530575 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:30.030527 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:30.030608 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:30.030914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:30.530157 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:30.530235 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:30.530505 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:31.030212 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:31.030285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:31.030567 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:31.530218 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:31.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:31.530647 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:31.530702 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:32.030406 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:32.030487 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:32.030812 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:32.530487 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:32.530568 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:32.530890 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:33.030273 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:33.030372 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:33.030696 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:33.530243 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:33.530324 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:33.530662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:34.030960 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:34.031034 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:34.031331 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:34.031398 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:34.531153 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:34.531229 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:34.531528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:35.031148 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:35.031227 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:35.031548 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:35.530192 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:35.530297 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:35.530620 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:36.030260 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:36.030343 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:36.030695 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:36.530255 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:36.530338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:36.530702 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:36.530761 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:37.030424 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:37.030507 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:37.030895 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:37.530587 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:37.530660 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:37.530982 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:38.030793 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:38.030877 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:38.031209 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:38.530652 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:38.530746 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:38.531014 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:38.531064 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:39.030920 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:39.030999 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:39.031358 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:39.530863 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:39.530944 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:39.531269 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:40.033571 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:40.033646 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:40.033999 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:40.530772 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:40.530895 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:40.531207 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:40.531255 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:41.030902 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:41.030976 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:41.031290 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:41.530836 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:41.530913 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:41.531177 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:42.031028 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:42.031104 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:42.031506 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:42.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:42.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:42.530602 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:43.030257 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:43.030326 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:43.030592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:43.030635 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:43.530202 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:43.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:43.530608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:44.030263 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:44.030345 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:44.030692 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:44.530382 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:44.530453 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:44.530758 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:45.030634 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:45.030711 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:45.031045 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:45.031092 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:45.530980 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:45.531053 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:45.531405 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:46.030984 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:46.031075 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:46.031347 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:46.531101 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:46.531172 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:46.531490 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:47.030218 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:47.030298 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:47.030674 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:47.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:47.530250 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:47.530516 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:47.530556 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 poll repeats every ~500 ms from 10:42:48.030 through 10:43:48.530, always with an empty response; node_ready.go emits the same "will retry ... connection refused" warning roughly every 2 s, the last at 10:43:46.530. The duplicate entries are elided here. ...]
	I1217 10:43:49.030376 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.030444 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.030775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:49.030832 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:49.530474 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.530545 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.530877 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.030914 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.030991 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.031360 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.531113 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.531458 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.030169 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.030240 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.030588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.530249 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.530328 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.530647 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:51.530704 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:52.030197 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.030269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.030691 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:52.530433 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.530506 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.530821 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.030577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.030964 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.530257 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:54.030277 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.030388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.030914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:54.030974 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:54.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.530304 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.530629 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.030620 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.030692 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.030975 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.530238 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.530627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.030306 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.530440 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.530509 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.530781 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:56.530820 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:57.030493 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.030591 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.030923 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:57.530749 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.530825 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.531153 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.030917 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.030996 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.031309 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.530552 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.530658 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.531261 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:58.531332 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:59.031089 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.031172 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.031521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:59.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.530308 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.530593 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.030748 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.030831 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.031142 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.530904 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.530989 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.531375 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:00.531435 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:01.030988 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.031055 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.031330 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:01.531137 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.531217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.531543 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.030312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.030660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.530197 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.530553 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:03.030250 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:03.030323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:03.030669 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:03.030723 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:03.530375 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:03.530452 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:03.530800 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:04.030214 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:04.030295 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:04.030627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:04.530317 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:04.530420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:04.530765 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:05.030596 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:05.030677 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:05.031020 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:05.031084 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:05.530367 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:05.530441 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:05.530720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:06.030267 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:06.030359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:06.030720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:06.530416 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:06.530489 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:06.530819 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:07.030188 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:07.030264 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:07.030539 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:07.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:07.530283 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:07.530594 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:07.530643 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:08.030372 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:08.030476 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:08.030889 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:08.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:08.530253 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:08.530521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:09.031107 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:09.031182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:09.031487 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:09.530173 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:09.530246 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:09.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:10.030187 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:10.030261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:10.030583 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:10.030632 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:10.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:10.530285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:10.530616 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:11.030321 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:11.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:11.030717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:11.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:11.530260 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:11.530567 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:12.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:12.030345 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:12.030692 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:12.030750 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:12.530410 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:12.530491 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:12.530831 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:13.030508 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:13.030583 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:13.030845 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:13.530215 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:13.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:13.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:14.030276 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:14.030355 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:14.030683 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:14.530187 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:14.530269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:14.530570 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:14.530619 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:15.030634 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:15.030728 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:15.031132 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:15.530902 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:15.530978 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:15.531320 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:16.031133 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:16.031225 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:16.031608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:16.530217 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:16.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:16.530631 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:16.530691 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:17.030347 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:17.030434 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:17.030771 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:17.530190 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:17.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:17.530547 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:18.030304 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:18.030396 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:18.030847 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:18.530228 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:18.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:18.530658 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:19.030405 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:19.030487 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:19.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:19.030793 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:19.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:19.530287 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:19.530630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:20.030471 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:20.030552 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:20.030904 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:20.530566 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:20.530649 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:20.530928 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:21.030264 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:21.030341 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:21.030695 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:21.530387 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:21.530465 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:21.530798 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:21.530854 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:22.030489 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:22.030560 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:22.030836 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:22.530229 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:22.530311 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:22.530651 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:23.030359 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:23.030435 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:23.030790 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:23.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:23.530257 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:23.530538 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:24.030285 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:24.030362 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:24.030720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:24.030784 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:24.530459 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:24.530561 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:24.530890 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:25.030748 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:25.030819 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:25.031092 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:25.530805 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:25.530886 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:25.531202 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:26.030981 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:26.031066 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:26.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:26.031506 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:26.530164 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:26.530240 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:26.530509 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:27.030231 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:27.030322 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:27.030709 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:27.530233 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:27.530309 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:27.530655 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:28.030347 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:28.030421 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:28.030710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:28.530223 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:28.530325 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:28.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:28.530685 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:29.030667 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:29.030745 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:29.031082 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:29.530788 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:29.530865 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:29.531130 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:30.031110 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:30.031196 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:30.031505 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:30.530206 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:30.530283 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:30.530647 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:30.530712 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:31.030226 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:31.030304 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:31.030570 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:31.530239 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:31.530328 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:31.530675 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:32.030374 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:32.030452 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:32.030786 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:32.530472 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:32.530546 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:32.530828 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:32.530875 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:33.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:33.030344 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:33.030721 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:33.530199 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:33.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:33.530635 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:34.030369 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:34.030448 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:34.030741 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:34.530447 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:34.530523 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:34.530886 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:34.530946 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:35.030720 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:35.030802 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:35.031141 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:35.530926 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:35.530998 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:35.531261 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:36.031086 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:36.031162 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:36.031504 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:36.530199 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:36.530272 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:36.530613 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:37.030185 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:37.030264 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:37.030544 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:37.030595 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:37.530230 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:37.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:37.530649 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:38.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:38.030343 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:38.030718 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:38.530403 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:38.530475 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:38.530749 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:39.030746 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:39.030820 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:39.031148 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:39.031206 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 poll repeats every ~500 ms from 10:44:39.530 through 10:45:39.531, each attempt returning an empty response (status="" headers="" milliseconds=0); node_ready.go:55 logs the identical "connection refused" (will retry) warning roughly every two seconds, the last in this span at 10:45:38.030 ...]
	I1217 10:45:40.031117 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.031198 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.031558 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:40.031631 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:40.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.530292 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.530625 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.030178 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.030256 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.030535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.030366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.030455 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.030891 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.530441 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.530519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.530792 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:42.530833 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:43.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.030585 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.030904 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:43.530228 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.530630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.030307 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.030381 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.030707 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.530227 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.530296 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:45.031339 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.031427 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.031745 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:45.031809 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:45.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.530545 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.030321 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.030687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.530366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.530762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.030423 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.030519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.030809 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.530502 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.530580 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.530914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:47.530970 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:48.030658 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.030731 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.031047 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:48.530357 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.530426 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.530764 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.030805 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.030882 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.031204 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.530982 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.531053 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:49.531427 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:50.031030 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.031115 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.031532 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:50.530205 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.530299 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.530623 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.030286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.030614 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.530323 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.530401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.530711 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:52.030254 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.030329 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.030627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:52.030687 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:52.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.530659 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.030195 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.030278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.030640 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.530268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.530359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.530765 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:54.030501 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.030589 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.030906 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:54.030956 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:54.530370 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.530458 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.530775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.030784 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.030866 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.031248 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.531028 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.531111 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.531412 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:56.031149 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.031232 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.031533 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:56.031587 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.530282 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.030260 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.030673 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.530209 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.530285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.530565 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.030688 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.530418 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.530493 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.530876 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:58.530935 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:59.030221 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.030349 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.030710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:59.530411 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.530486 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.530845 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:46:00.030785 2968376 type.go:168] "Request Body" body=""
	I1217 10:46:00.030868 2968376 node_ready.go:38] duration metric: took 6m0.00085226s for node "functional-232588" to be "Ready" ...
	I1217 10:46:00.039967 2968376 out.go:203] 
	W1217 10:46:00.043066 2968376 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 10:46:00.043095 2968376 out.go:285] * 
	W1217 10:46:00.047185 2968376 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:46:00.056487 2968376 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-232588 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.858826158s for "functional-232588" cluster.
I1217 10:46:00.703135 2924574 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
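For local debugging, the readiness poll that times out above can be reproduced by hand. A minimal sketch, assuming the cluster IP, apiserver port, and node name shown in the log; while the failure persists both commands should report "connection refused" (a healthy apiserver would instead answer, typically with 401/403 absent client credentials):

	# Same endpoint the node_ready poll requests, straight at the container IP
	curl -sk https://192.168.49.2:8441/api/v1/nodes/functional-232588
	# Or via the forwarded host port (35736 per the docker inspect output below)
	curl -sk https://127.0.0.1:35736/api/v1/nodes/functional-232588
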
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
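As an aside, the forwarded apiserver port does not have to be read out of the full JSON above; docker inspect accepts a Go template. A sketch against the same container (the template syntax is standard docker CLI; the container name is taken from the output above):

	# Prints the host port bound to 8441/tcp -- 35736 in the state captured above
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-232588
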
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (371.95591ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
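The {{.Host}} template above only reflects the container state, which is why it reports Running even though the cluster is down. When triaging, the other status fields are more telling; a sketch using the same subcommand (field names follow minikube's documented status template, and a non-zero exit is expected while components are stopped):

	# Host should print Running; Kubelet/APIServer are expected to show Stopped or Error here
	out/minikube-linux-arm64 status -p functional-232588 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
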
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-232588 logs -n 25: (1.007852242s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-626013 ssh sudo cat /etc/ssl/certs/29245742.pem                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh sudo cat /usr/share/ca-certificates/29245742.pem                                                                                          │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image load --daemon kicbase/echo-server:functional-626013 --alsologtostderr                                                                   │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image save kicbase/echo-server:functional-626013 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image rm kicbase/echo-server:functional-626013 --alsologtostderr                                                                              │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ update-context │ functional-626013 update-context --alsologtostderr -v=2                                                                                                         │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ update-context │ functional-626013 update-context --alsologtostderr -v=2                                                                                                         │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ update-context │ functional-626013 update-context --alsologtostderr -v=2                                                                                                         │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image save --daemon kicbase/echo-server:functional-626013 --alsologtostderr                                                                   │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls --format short --alsologtostderr                                                                                                     │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls --format yaml --alsologtostderr                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh pgrep buildkitd                                                                                                                           │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ image          │ functional-626013 image ls --format json --alsologtostderr                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls --format table --alsologtostderr                                                                                                     │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr                                                          │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ delete         │ -p functional-626013                                                                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ start          │ -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ start          │ -p functional-232588 --alsologtostderr -v=8                                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:39 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:39:54
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:39:54.887492 2968376 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:39:54.887669 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887679 2968376 out.go:374] Setting ErrFile to fd 2...
	I1217 10:39:54.887684 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887953 2968376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:39:54.888377 2968376 out.go:368] Setting JSON to false
	I1217 10:39:54.889321 2968376 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":58945,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:39:54.889394 2968376 start.go:143] virtualization:  
	I1217 10:39:54.892820 2968376 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:39:54.896642 2968376 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:39:54.896710 2968376 notify.go:221] Checking for updates...
	I1217 10:39:54.900325 2968376 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:39:54.903432 2968376 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:54.906306 2968376 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:39:54.909105 2968376 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:39:54.911889 2968376 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:39:54.915217 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:54.915331 2968376 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:39:54.937972 2968376 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:39:54.938091 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.000760 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:54.991784263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.000879 2968376 docker.go:319] overlay module found
	I1217 10:39:55.005745 2968376 out.go:179] * Using the docker driver based on existing profile
	I1217 10:39:55.010762 2968376 start.go:309] selected driver: docker
	I1217 10:39:55.010794 2968376 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.010914 2968376 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:39:55.011044 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.065164 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:55.056463493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.065569 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:55.065633 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:55.065694 2968376 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.070664 2968376 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:39:55.073373 2968376 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:39:55.076286 2968376 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:39:55.079282 2968376 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:39:55.079315 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:55.079350 2968376 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:39:55.079358 2968376 cache.go:65] Caching tarball of preloaded images
	I1217 10:39:55.079437 2968376 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:39:55.079447 2968376 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 10:39:55.079550 2968376 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:39:55.100219 2968376 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:39:55.100251 2968376 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
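
The pull is skipped because the pinned digest is already present in the local daemon. The same presence check can be reproduced with docker image inspect (digest copied verbatim from the log above):

    # Exits 0 only if the pinned kicbase digest is already local,
    # which is exactly why the log reports "skipping pull".
    docker image inspect \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 \
      --format '{{.Id}}' && echo present
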
	I1217 10:39:55.100265 2968376 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:39:55.100297 2968376 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:39:55.100355 2968376 start.go:364] duration metric: took 36.061µs to acquireMachinesLock for "functional-232588"
	I1217 10:39:55.100378 2968376 start.go:96] Skipping create...Using existing machine configuration
	I1217 10:39:55.100389 2968376 fix.go:54] fixHost starting: 
	I1217 10:39:55.100690 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:55.118322 2968376 fix.go:112] recreateIfNeeded on functional-232588: state=Running err=<nil>
	W1217 10:39:55.118352 2968376 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 10:39:55.121614 2968376 out.go:252] * Updating the running docker "functional-232588" container ...
	I1217 10:39:55.121666 2968376 machine.go:94] provisionDockerMachine start ...
	I1217 10:39:55.121762 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.140448 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.140568 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.140576 2968376 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:39:55.272992 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.273058 2968376 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:39:55.273155 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.294100 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.294200 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.294209 2968376 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:39:55.433566 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.433651 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.452012 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.452130 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.452152 2968376 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:39:55.584734 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: 
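
The /etc/hosts script above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one, and does nothing when the hostname already resolves. A standalone sketch of the same logic, with NAME as a placeholder for the machine name:

    # Idempotent 127.0.1.1 registration, mirroring the SSH script above.
    NAME=functional-232588
    if ! grep -q "127.0.1.1 ${NAME}" /etc/hosts; then
      if grep -q '^127.0.1.1' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1.*/127.0.1.1 ${NAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
      fi
    fi
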
	I1217 10:39:55.584801 2968376 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:39:55.584835 2968376 ubuntu.go:190] setting up certificates
	I1217 10:39:55.584846 2968376 provision.go:84] configureAuth start
	I1217 10:39:55.584917 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:55.602169 2968376 provision.go:143] copyHostCerts
	I1217 10:39:55.602226 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602261 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:39:55.602273 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602347 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:39:55.602482 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602507 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:39:55.602512 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602540 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:39:55.602588 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602609 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:39:55.602618 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602651 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:39:55.602701 2968376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
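
configureAuth reissues the machine's server certificate against the profile CA with the SAN set listed above (127.0.0.1, 192.168.49.2, and the hostname aliases). A hedged openssl equivalent, with illustrative file names rather than minikube's exact layout:

    # Sketch: sign a server cert with the same SANs. ca.pem / ca-key.pem
    # paths are placeholders; 1095 days matches the 26280h CertExpiration.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -out server.csr -subj '/O=jenkins.functional-232588'
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 1095 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-232588,DNS:localhost,DNS:minikube')
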
	I1217 10:39:55.859794 2968376 provision.go:177] copyRemoteCerts
	I1217 10:39:55.859877 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:39:55.859950 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.877144 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:55.974879 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 10:39:55.974962 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:39:55.992960 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 10:39:55.993024 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 10:39:56.017007 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 10:39:56.017075 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:39:56.039037 2968376 provision.go:87] duration metric: took 454.177473ms to configureAuth
	I1217 10:39:56.039062 2968376 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:39:56.039248 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:56.039255 2968376 machine.go:97] duration metric: took 917.583269ms to provisionDockerMachine
	I1217 10:39:56.039263 2968376 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:39:56.039274 2968376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:39:56.039330 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:39:56.039374 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.064674 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.164379 2968376 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:39:56.167903 2968376 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 10:39:56.167924 2968376 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 10:39:56.167929 2968376 command_runner.go:130] > VERSION_ID="12"
	I1217 10:39:56.167934 2968376 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 10:39:56.167939 2968376 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 10:39:56.167943 2968376 command_runner.go:130] > ID=debian
	I1217 10:39:56.167947 2968376 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 10:39:56.167952 2968376 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 10:39:56.167958 2968376 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 10:39:56.168026 2968376 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:39:56.168043 2968376 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:39:56.168054 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:39:56.168116 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:39:56.168193 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:39:56.168199 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /etc/ssl/certs/29245742.pem
	I1217 10:39:56.168276 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:39:56.168280 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> /etc/test/nested/copy/2924574/hosts
	I1217 10:39:56.168325 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:39:56.175992 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:56.194065 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:39:56.211618 2968376 start.go:296] duration metric: took 172.340234ms for postStartSetup
	I1217 10:39:56.211696 2968376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:39:56.211740 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.229142 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.321408 2968376 command_runner.go:130] > 18%
	I1217 10:39:56.321497 2968376 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:39:56.325775 2968376 command_runner.go:130] > 160G
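
For the record, the two probes above are plain df invocations: the first yields the used fraction of /var (the "18%" line), the second the space still free in GiB (the "160G" line):

    # Capacity probes from postStartSetup, annotated.
    df -h /var  | awk 'NR==2{print $5}'   # percent of /var in use
    df -BG /var | awk 'NR==2{print $4}'   # GiB still available on /var
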
	I1217 10:39:56.326243 2968376 fix.go:56] duration metric: took 1.225850623s for fixHost
	I1217 10:39:56.326261 2968376 start.go:83] releasing machines lock for "functional-232588", held for 1.22589425s
	I1217 10:39:56.326382 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:56.351440 2968376 ssh_runner.go:195] Run: cat /version.json
	I1217 10:39:56.351467 2968376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:39:56.351509 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.351532 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.377953 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.378286 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.472298 2968376 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 10:39:56.558575 2968376 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
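
Both release checks above (node build metadata in /version.json, and outbound reachability of registry.k8s.io with a 2-second cap) can be replayed from the host via minikube ssh; the profile name is the one from this run:

    # Replay the two checks the log just ran inside the node.
    minikube -p functional-232588 ssh -- cat /version.json
    minikube -p functional-232588 ssh -- curl -sS -m 2 https://registry.k8s.io/
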
	I1217 10:39:56.561329 2968376 ssh_runner.go:195] Run: systemctl --version
	I1217 10:39:56.567378 2968376 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 10:39:56.567418 2968376 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 10:39:56.567866 2968376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 10:39:56.572178 2968376 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 10:39:56.572242 2968376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:39:56.572327 2968376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:39:56.580077 2968376 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
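
The find invocation above is logged with its shell quoting stripped, so it is not copy-pasteable as printed. A runnable form with the quoting restored (same pattern set, same .mk_disabled rename):

    # Disable any bridge/podman CNI configs by renaming them aside.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
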
	I1217 10:39:56.580102 2968376 start.go:496] detecting cgroup driver to use...
	I1217 10:39:56.580153 2968376 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 10:39:56.580207 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:39:56.595473 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:39:56.608619 2968376 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:39:56.608683 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:39:56.624626 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:39:56.639198 2968376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:39:56.750544 2968376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:39:56.881240 2968376 docker.go:234] disabling docker service ...
	I1217 10:39:56.881321 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:39:56.896533 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:39:56.909686 2968376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:39:57.029179 2968376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:39:57.147650 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
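
Condensed, the hand-off to containerd above is a stop/disable/mask sequence, so neither socket activation nor a unit dependency can bring Docker back:

    # Stop Docker, block socket re-activation, and mask the unit,
    # then confirm it is no longer active (as the log's final check does).
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo 'docker is down'
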
	I1217 10:39:57.160165 2968376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:39:57.172821 2968376 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
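
The printf/tee pair above materializes a one-line crictl config pointing at containerd's CRI socket; the same step, followed by a quick sanity check that crictl can actually reach the runtime:

    # Point crictl at containerd's CRI socket (the file the log just wrote).
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' |
      sudo tee /etc/crictl.yaml
    sudo crictl info >/dev/null && echo 'crictl can reach containerd'
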
	I1217 10:39:57.174291 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:39:57.183184 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:39:57.192049 2968376 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:39:57.192173 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:39:57.201301 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.210430 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:39:57.219288 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.228051 2968376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:39:57.235994 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:39:57.245724 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:39:57.254416 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
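
The run of sed edits above rewrites /etc/containerd/config.toml in place. The two that decide runtime behaviour here are the cgroup driver (SystemdCgroup = false, i.e. cgroupfs) and the pinned sandbox image; isolated for reference, verbatim from the log:

    # The two load-bearing config.toml edits, plus a check of the result.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
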
	I1217 10:39:57.263062 2968376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:39:57.269668 2968376 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 10:39:57.270584 2968376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:39:57.278345 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.386138 2968376 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 10:39:57.532674 2968376 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:39:57.532750 2968376 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:39:57.536608 2968376 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 10:39:57.536637 2968376 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 10:39:57.536644 2968376 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1217 10:39:57.536652 2968376 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:57.536659 2968376 command_runner.go:130] > Access: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536664 2968376 command_runner.go:130] > Modify: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536669 2968376 command_runner.go:130] > Change: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536673 2968376 command_runner.go:130] >  Birth: -
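
After daemon-reload and restart, minikube gives the socket a 60-second budget to appear; the stat output above is the success case. A minimal standalone equivalent of that wait:

    # Restart containerd and wait up to 60s for its socket, as the log does.
    sudo systemctl daemon-reload
    sudo systemctl restart containerd
    for _ in $(seq 1 60); do
      [ -S /run/containerd/containerd.sock ] && break
      sleep 1
    done
    stat /run/containerd/containerd.sock
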
	I1217 10:39:57.537168 2968376 start.go:564] Will wait 60s for crictl version
	I1217 10:39:57.537224 2968376 ssh_runner.go:195] Run: which crictl
	I1217 10:39:57.540827 2968376 command_runner.go:130] > /usr/local/bin/crictl
	I1217 10:39:57.541302 2968376 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:39:57.573267 2968376 command_runner.go:130] > Version:  0.1.0
	I1217 10:39:57.573463 2968376 command_runner.go:130] > RuntimeName:  containerd
	I1217 10:39:57.573480 2968376 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 10:39:57.573656 2968376 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 10:39:57.575908 2968376 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:39:57.575979 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.593702 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.595828 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.613025 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.620756 2968376 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 10:39:57.623690 2968376 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:39:57.639560 2968376 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:39:57.643332 2968376 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 10:39:57.643691 2968376 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:39:57.643808 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:57.643873 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.668138 2968376 command_runner.go:130] > {
	I1217 10:39:57.668155 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.668160 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668169 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.668174 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668179 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.668183 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668187 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668196 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.668199 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668204 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.668208 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668212 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668215 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668218 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668226 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.668231 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668236 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.668239 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668244 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668252 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.668260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668264 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.668267 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668271 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668274 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668284 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.668288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668293 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.668296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668303 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668311 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.668314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668319 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.668323 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.668327 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668330 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668333 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668340 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.668344 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668348 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.668351 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668355 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668363 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.668366 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668370 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.668375 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668379 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668382 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668386 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668390 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668393 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668396 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668405 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.668409 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668433 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.668438 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668442 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668450 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.668454 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668458 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.668461 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668470 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668478 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668482 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668485 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668489 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668492 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668498 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.668503 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.668512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668517 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668530 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.668537 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668542 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.668545 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668549 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668557 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668562 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668576 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668580 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668583 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668589 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.668593 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668598 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.668608 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668614 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668622 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.668629 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668633 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.668637 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668641 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668645 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668648 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668655 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.668662 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668668 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.668672 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668678 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668689 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.668692 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668696 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.668702 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668706 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668719 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668723 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668726 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668730 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668734 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668740 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.668748 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668753 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.668756 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668760 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668767 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.668773 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668777 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.668781 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668792 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.668799 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668803 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668807 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.668810 2968376 command_runner.go:130] >     }
	I1217 10:39:57.668813 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.668816 2968376 command_runner.go:130] > }
	I1217 10:39:57.671107 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.671128 2968376 containerd.go:534] Images already preloaded, skipping extraction
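
The "all images are preloaded" verdict comes from comparing that JSON against the expected image set for v1.35.0-rc.1. A jq one-liner over the same crictl output makes the comparison visible; run it inside the node (e.g. via minikube ssh), with jq assumed available there and the tag list copied from the log above:

    # Diff the control-plane image set against what containerd reports.
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort > /tmp/have.txt
    for img in \
      registry.k8s.io/kube-apiserver:v1.35.0-rc.1 \
      registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 \
      registry.k8s.io/kube-scheduler:v1.35.0-rc.1 \
      registry.k8s.io/kube-proxy:v1.35.0-rc.1 \
      registry.k8s.io/etcd:3.6.6-0 \
      registry.k8s.io/coredns/coredns:v1.13.1 \
      registry.k8s.io/pause:3.10.1
    do
      grep -qx "$img" /tmp/have.txt || echo "missing: $img"
    done
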
	I1217 10:39:57.671185 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.697059 2968376 command_runner.go:130] > {
	I1217 10:39:57.697078 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.697083 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697093 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.697108 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697114 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.697118 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697122 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697131 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.697142 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697147 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.697155 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697159 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697162 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697166 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697175 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.697180 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697185 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.697188 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697192 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697202 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.697205 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697209 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.697213 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697216 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697219 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697222 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697229 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.697233 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697238 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.697242 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697249 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697256 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.697260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697264 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.697268 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.697272 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697275 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697284 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.697288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697293 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.697296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697300 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697310 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.697314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697318 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.697323 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697327 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697330 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697334 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697338 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697341 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697344 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697350 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.697354 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697359 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.697363 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697366 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697374 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.697377 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697381 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.697384 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697393 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697396 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697400 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697403 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697406 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697409 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697416 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.697419 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697425 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.697428 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697432 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697440 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.697443 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697448 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.697460 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697464 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697467 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697470 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697474 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697477 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697480 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697486 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.697490 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697495 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.697498 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697501 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.697512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697515 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.697519 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697523 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697526 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697530 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697536 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.697540 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697545 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.697548 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697552 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697560 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.697563 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697567 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.697570 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697574 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697578 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697581 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697585 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697588 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697594 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697600 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.697604 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697609 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.697612 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697615 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697622 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.697626 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697630 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.697633 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697637 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.697641 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697645 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697649 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.697652 2968376 command_runner.go:130] >     }
	I1217 10:39:57.697655 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.697657 2968376 command_runner.go:130] > }
	I1217 10:39:57.699989 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.700059 2968376 cache_images.go:86] Images are preloaded, skipping loading
	I1217 10:39:57.700081 2968376 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:39:57.700225 2968376 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
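
The [Unit]/[Service] fragment above is kubelet's systemd override: the empty ExecStart= clears any packaged command line before the minikube one is set. A sketch of materializing it as a drop-in; the 10-kubeadm.conf path is the conventional kubeadm location and is an assumption here, not something this log states:

    # Write the kubelet flags from the log as a systemd drop-in override.
    # Drop-in path below is the conventional kubeadm one (an assumption).
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Unit]' 'Wants=containerd.service' '' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload
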
	I1217 10:39:57.700311 2968376 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:39:57.722782 2968376 command_runner.go:130] > {
	I1217 10:39:57.722800 2968376 command_runner.go:130] >   "cniconfig": {
	I1217 10:39:57.722805 2968376 command_runner.go:130] >     "Networks": [
	I1217 10:39:57.722813 2968376 command_runner.go:130] >       {
	I1217 10:39:57.722822 2968376 command_runner.go:130] >         "Config": {
	I1217 10:39:57.722827 2968376 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 10:39:57.722835 2968376 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 10:39:57.722839 2968376 command_runner.go:130] >           "Plugins": [
	I1217 10:39:57.722843 2968376 command_runner.go:130] >             {
	I1217 10:39:57.722847 2968376 command_runner.go:130] >               "Network": {
	I1217 10:39:57.722851 2968376 command_runner.go:130] >                 "ipam": {},
	I1217 10:39:57.722856 2968376 command_runner.go:130] >                 "type": "loopback"
	I1217 10:39:57.722860 2968376 command_runner.go:130] >               },
	I1217 10:39:57.722866 2968376 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 10:39:57.722869 2968376 command_runner.go:130] >             }
	I1217 10:39:57.722873 2968376 command_runner.go:130] >           ],
	I1217 10:39:57.722882 2968376 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 10:39:57.722886 2968376 command_runner.go:130] >         },
	I1217 10:39:57.722893 2968376 command_runner.go:130] >         "IFName": "lo"
	I1217 10:39:57.722896 2968376 command_runner.go:130] >       }
	I1217 10:39:57.722899 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722908 2968376 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 10:39:57.722912 2968376 command_runner.go:130] >     "PluginDirs": [
	I1217 10:39:57.722915 2968376 command_runner.go:130] >       "/opt/cni/bin"
	I1217 10:39:57.722919 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722923 2968376 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 10:39:57.722926 2968376 command_runner.go:130] >     "Prefix": "eth"
	I1217 10:39:57.722930 2968376 command_runner.go:130] >   },
	I1217 10:39:57.722933 2968376 command_runner.go:130] >   "config": {
	I1217 10:39:57.722936 2968376 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 10:39:57.722940 2968376 command_runner.go:130] >       "/etc/cdi",
	I1217 10:39:57.722944 2968376 command_runner.go:130] >       "/var/run/cdi"
	I1217 10:39:57.722948 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722952 2968376 command_runner.go:130] >     "cni": {
	I1217 10:39:57.722955 2968376 command_runner.go:130] >       "binDir": "",
	I1217 10:39:57.722959 2968376 command_runner.go:130] >       "binDirs": [
	I1217 10:39:57.722962 2968376 command_runner.go:130] >         "/opt/cni/bin"
	I1217 10:39:57.722965 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.722969 2968376 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 10:39:57.722973 2968376 command_runner.go:130] >       "confTemplate": "",
	I1217 10:39:57.722983 2968376 command_runner.go:130] >       "ipPref": "",
	I1217 10:39:57.722986 2968376 command_runner.go:130] >       "maxConfNum": 1,
	I1217 10:39:57.722991 2968376 command_runner.go:130] >       "setupSerially": false,
	I1217 10:39:57.722995 2968376 command_runner.go:130] >       "useInternalLoopback": false
	I1217 10:39:57.722998 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723004 2968376 command_runner.go:130] >     "containerd": {
	I1217 10:39:57.723008 2968376 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 10:39:57.723013 2968376 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 10:39:57.723017 2968376 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 10:39:57.723021 2968376 command_runner.go:130] >       "runtimes": {
	I1217 10:39:57.723024 2968376 command_runner.go:130] >         "runc": {
	I1217 10:39:57.723029 2968376 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 10:39:57.723033 2968376 command_runner.go:130] >           "PodAnnotations": null,
	I1217 10:39:57.723038 2968376 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 10:39:57.723046 2968376 command_runner.go:130] >           "cgroupWritable": false,
	I1217 10:39:57.723050 2968376 command_runner.go:130] >           "cniConfDir": "",
	I1217 10:39:57.723054 2968376 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 10:39:57.723058 2968376 command_runner.go:130] >           "io_type": "",
	I1217 10:39:57.723061 2968376 command_runner.go:130] >           "options": {
	I1217 10:39:57.723065 2968376 command_runner.go:130] >             "BinaryName": "",
	I1217 10:39:57.723069 2968376 command_runner.go:130] >             "CriuImagePath": "",
	I1217 10:39:57.723074 2968376 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 10:39:57.723077 2968376 command_runner.go:130] >             "IoGid": 0,
	I1217 10:39:57.723081 2968376 command_runner.go:130] >             "IoUid": 0,
	I1217 10:39:57.723085 2968376 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 10:39:57.723089 2968376 command_runner.go:130] >             "Root": "",
	I1217 10:39:57.723092 2968376 command_runner.go:130] >             "ShimCgroup": "",
	I1217 10:39:57.723096 2968376 command_runner.go:130] >             "SystemdCgroup": false
	I1217 10:39:57.723100 2968376 command_runner.go:130] >           },
	I1217 10:39:57.723105 2968376 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 10:39:57.723111 2968376 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 10:39:57.723115 2968376 command_runner.go:130] >           "runtimePath": "",
	I1217 10:39:57.723120 2968376 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 10:39:57.723124 2968376 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 10:39:57.723128 2968376 command_runner.go:130] >           "snapshotter": ""
	I1217 10:39:57.723131 2968376 command_runner.go:130] >         }
	I1217 10:39:57.723134 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723136 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723146 2968376 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 10:39:57.723151 2968376 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 10:39:57.723156 2968376 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 10:39:57.723161 2968376 command_runner.go:130] >     "disableApparmor": false,
	I1217 10:39:57.723166 2968376 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 10:39:57.723170 2968376 command_runner.go:130] >     "disableProcMount": false,
	I1217 10:39:57.723174 2968376 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 10:39:57.723177 2968376 command_runner.go:130] >     "enableCDI": true,
	I1217 10:39:57.723181 2968376 command_runner.go:130] >     "enableSelinux": false,
	I1217 10:39:57.723188 2968376 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 10:39:57.723195 2968376 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 10:39:57.723200 2968376 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 10:39:57.723204 2968376 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 10:39:57.723208 2968376 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 10:39:57.723212 2968376 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 10:39:57.723216 2968376 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 10:39:57.723222 2968376 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723226 2968376 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 10:39:57.723231 2968376 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723236 2968376 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 10:39:57.723241 2968376 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 10:39:57.723243 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723247 2968376 command_runner.go:130] >   "features": {
	I1217 10:39:57.723251 2968376 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 10:39:57.723254 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723257 2968376 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 10:39:57.723267 2968376 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723277 2968376 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723281 2968376 command_runner.go:130] >   "runtimeHandlers": [
	I1217 10:39:57.723283 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723287 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723291 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723297 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723299 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723302 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723305 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723308 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723315 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723319 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723322 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723326 2968376 command_runner.go:130] >       "name": "runc"
	I1217 10:39:57.723328 2968376 command_runner.go:130] >     }
	I1217 10:39:57.723335 2968376 command_runner.go:130] >   ],
	I1217 10:39:57.723338 2968376 command_runner.go:130] >   "status": {
	I1217 10:39:57.723342 2968376 command_runner.go:130] >     "conditions": [
	I1217 10:39:57.723345 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723348 2968376 command_runner.go:130] >         "message": "",
	I1217 10:39:57.723352 2968376 command_runner.go:130] >         "reason": "",
	I1217 10:39:57.723356 2968376 command_runner.go:130] >         "status": true,
	I1217 10:39:57.723361 2968376 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 10:39:57.723364 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723367 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723373 2968376 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 10:39:57.723378 2968376 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 10:39:57.723382 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723386 2968376 command_runner.go:130] >         "type": "NetworkReady"
	I1217 10:39:57.723389 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723391 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723414 2968376 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 10:39:57.723421 2968376 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 10:39:57.723426 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723432 2968376 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 10:39:57.723434 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723437 2968376 command_runner.go:130] >     ]
	I1217 10:39:57.723440 2968376 command_runner.go:130] >   }
	I1217 10:39:57.723442 2968376 command_runner.go:130] > }
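The JSON dump above is the CRI runtime status as containerd reports it (the same document printed by crictl info). The NetworkReady=false / NetworkPluginNotReady condition is expected at this point: /etc/cni/net.d is still empty, which is also why minikube proceeds to install kindnet below. A minimal Go sketch, assuming the status JSON arrives on stdin with the field names shown above, that extracts just the readiness conditions:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Matches the "status.conditions" entries in the crictl info output above.
    type condition struct {
    	Type    string `json:"type"`
    	Status  bool   `json:"status"`
    	Reason  string `json:"reason"`
    	Message string `json:"message"`
    }

    type criInfo struct {
    	Status struct {
    		Conditions []condition `json:"conditions"`
    	} `json:"status"`
    }

    func main() {
    	var info criInfo
    	if err := json.NewDecoder(os.Stdin).Decode(&info); err != nil {
    		fmt.Fprintln(os.Stderr, "decode:", err)
    		os.Exit(1)
    	}
    	for _, c := range info.Status.Conditions {
    		fmt.Printf("%-40s ok=%-5v reason=%s\n", c.Type, c.Status, c.Reason)
    	}
    }

Fed with crictl info output it would print one line per condition (RuntimeReady ok=true, NetworkReady ok=false reason=NetworkPluginNotReady, and so on).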
	I1217 10:39:57.726093 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:57.726119 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:57.726139 2968376 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:39:57.726166 2968376 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:39:57.726283 2968376 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
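The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers; it is written to /var/tmp/minikube/kubeadm.yaml.new below. A short sketch that enumerates the documents in such a stream, assuming the gopkg.in/yaml.v3 dependency:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// e.g. `go run main.go < kubeadm.yaml` with the config generated above
    	dec := yaml.NewDecoder(os.Stdin)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		err := dec.Decode(&doc) // decodes the next `---`-separated document
    		if err == io.EOF {
    			break
    		}
    		if err != nil {
    			fmt.Fprintln(os.Stderr, "decode:", err)
    			os.Exit(1)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }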
	
	I1217 10:39:57.726359 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:39:57.733320 2968376 command_runner.go:130] > kubeadm
	I1217 10:39:57.733342 2968376 command_runner.go:130] > kubectl
	I1217 10:39:57.733347 2968376 command_runner.go:130] > kubelet
	I1217 10:39:57.734253 2968376 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 10:39:57.734351 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:39:57.741900 2968376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:39:57.754718 2968376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:39:57.767131 2968376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 10:39:57.780328 2968376 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:39:57.783968 2968376 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 10:39:57.784263 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.891500 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:39:58.252332 2968376 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:39:58.252409 2968376 certs.go:195] generating shared ca certs ...
	I1217 10:39:58.252461 2968376 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.252670 2968376 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:39:58.252752 2968376 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:39:58.252788 2968376 certs.go:257] generating profile certs ...
	I1217 10:39:58.252943 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:39:58.253053 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:39:58.253133 2968376 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:39:58.253172 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 10:39:58.253214 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 10:39:58.253260 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 10:39:58.253294 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 10:39:58.253341 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 10:39:58.253377 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 10:39:58.253421 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 10:39:58.253456 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 10:39:58.253577 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:39:58.253658 2968376 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:39:58.253688 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:39:58.253756 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:39:58.253819 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:39:58.253883 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:39:58.253975 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:58.254044 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.254093 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem -> /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.254126 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.254782 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:39:58.276977 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:39:58.300224 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:39:58.319429 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:39:58.338203 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:39:58.355898 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:39:58.373473 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:39:58.391528 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:39:58.408858 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:39:58.426819 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:39:58.444926 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:39:58.462979 2968376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 10:39:58.476114 2968376 ssh_runner.go:195] Run: openssl version
	I1217 10:39:58.483093 2968376 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 10:39:58.483240 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.490661 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:39:58.498193 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502204 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502289 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502352 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.543361 2968376 command_runner.go:130] > b5213941
	I1217 10:39:58.543894 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 10:39:58.551548 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.559110 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:39:58.567064 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.570982 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571071 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571149 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.611772 2968376 command_runner.go:130] > 51391683
	I1217 10:39:58.612217 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:39:58.619901 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.627496 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:39:58.635170 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639161 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639286 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639343 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.679963 2968376 command_runner.go:130] > 3ec20f2e
	I1217 10:39:58.680491 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
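The three openssl x509 -hash calls above compute the subject-name hash of each CA certificate; OpenSSL locates trust anchors through symlinks named <hash>.0 under /etc/ssl/certs, which is what each ln -fs / test -L pair sets up. The same step in Go, a sketch only, with the hash obtained exactly as in the log and the paths taken from it:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
    	// Same command the log runs: the subject-name hash names the symlink.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", link, "->", cert)
    }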
	I1217 10:39:58.687873 2968376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691452 2968376 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691483 2968376 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 10:39:58.691491 2968376 command_runner.go:130] > Device: 259,1	Inode: 3648630     Links: 1
	I1217 10:39:58.691498 2968376 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:58.691503 2968376 command_runner.go:130] > Access: 2025-12-17 10:35:51.067485305 +0000
	I1217 10:39:58.691508 2968376 command_runner.go:130] > Modify: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691513 2968376 command_runner.go:130] > Change: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691519 2968376 command_runner.go:130] >  Birth: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691792 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 10:39:58.732576 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.733078 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 10:39:58.773416 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.773947 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 10:39:58.814511 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.815058 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 10:39:58.855809 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.856437 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 10:39:58.897493 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.897637 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 10:39:58.937941 2968376 command_runner.go:130] > Certificate will not expire
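Each openssl x509 -checkend 86400 call above exits non-zero only if the certificate expires within the next 24 hours, producing the "Certificate will not expire" lines. The equivalent check with Go's crypto/x509, a sketch assuming one of the certificate paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Equivalent of `openssl x509 -checkend 86400`.
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("Certificate will expire")
    		os.Exit(1)
    	}
    	fmt.Println("Certificate will not expire")
    }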
	I1217 10:39:58.938362 2968376 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:58.938478 2968376 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:39:58.938558 2968376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:39:58.967095 2968376 cri.go:89] found id: ""
	I1217 10:39:58.967172 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:39:58.974207 2968376 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 10:39:58.974232 2968376 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 10:39:58.974239 2968376 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 10:39:58.975124 2968376 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 10:39:58.975142 2968376 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 10:39:58.975194 2968376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 10:39:58.982722 2968376 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:39:58.983159 2968376 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232588" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.983280 2968376 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232588" cluster setting kubeconfig missing "functional-232588" context setting]
	I1217 10:39:58.983551 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
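The kubeconfig repair above adds the missing functional-232588 cluster and context entries before rewriting the file under a lock. With client-go's clientcmd package the same edit looks roughly like this sketch (server address and CA path taken from the log; the AuthInfo wiring is assumed here for illustration):

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/22182-2922712/kubeconfig"
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		panic(err)
    	}

    	// Add the cluster entry the verify step found missing.
    	cluster := clientcmdapi.NewCluster()
    	cluster.Server = "https://192.168.49.2:8441"
    	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt"
    	cfg.Clusters["functional-232588"] = cluster

    	// Add the matching context entry.
    	ctx := clientcmdapi.NewContext()
    	ctx.Cluster = "functional-232588"
    	ctx.AuthInfo = "functional-232588" // assumed user name for this sketch
    	cfg.Contexts["functional-232588"] = ctx

    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		panic(err)
    	}
    }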
	I1217 10:39:58.984002 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.984156 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 10:39:58.984706 2968376 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 10:39:58.984730 2968376 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 10:39:58.984737 2968376 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 10:39:58.984745 2968376 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 10:39:58.984756 2968376 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 10:39:58.984794 2968376 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 10:39:58.985054 2968376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 10:39:58.992764 2968376 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 10:39:58.992810 2968376 kubeadm.go:602] duration metric: took 17.660629ms to restartPrimaryControlPlane
	I1217 10:39:58.992820 2968376 kubeadm.go:403] duration metric: took 54.467316ms to StartCluster
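The sudo diff -u run above compares the kubeadm config already on the node with the freshly generated kubeadm.yaml.new; diff exits 0 when the files match, which is how the "does not require reconfiguration" branch is taken. A sketch of that decision:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// diff exits 0 when the files are identical, 1 when they differ.
    	err := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new").Run()
    	if err == nil {
    		fmt.Println("running cluster does not require reconfiguration")
    		return
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		fmt.Println("config changed: reconfigure with the new kubeadm.yaml")
    		return
    	}
    	fmt.Println("diff failed:", err)
    }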
	I1217 10:39:58.992834 2968376 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.992909 2968376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.993526 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.993746 2968376 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 10:39:58.994170 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:58.994219 2968376 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 10:39:58.994288 2968376 addons.go:70] Setting storage-provisioner=true in profile "functional-232588"
	I1217 10:39:58.994301 2968376 addons.go:239] Setting addon storage-provisioner=true in "functional-232588"
	I1217 10:39:58.994329 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:58.994354 2968376 addons.go:70] Setting default-storageclass=true in profile "functional-232588"
	I1217 10:39:58.994416 2968376 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232588"
	I1217 10:39:58.994775 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:58.994809 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.000060 2968376 out.go:179] * Verifying Kubernetes components...
	I1217 10:39:59.002988 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:59.030107 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:59.030278 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 10:39:59.030548 2968376 addons.go:239] Setting addon default-storageclass=true in "functional-232588"
	I1217 10:39:59.030583 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:59.030999 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.046619 2968376 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 10:39:59.049547 2968376 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.049578 2968376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 10:39:59.049652 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.071122 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:59.078111 2968376 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 10:39:59.078138 2968376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 10:39:59.078204 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.106268 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
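The two sshutil.go lines above open SSH sessions against the node's forwarded port 35733 so the addon manifests can be copied in. A sketch of such a client with the golang.org/x/crypto/ssh dependency, using the port, user, and key path from the log:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:35733", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; minikube nodes are ephemeral
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.Output("uname -m")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }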
	I1217 10:39:59.210035 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:39:59.247804 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.250104 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.029975 2968376 node_ready.go:35] waiting up to 6m0s for node "functional-232588" to be "Ready" ...
	I1217 10:40:00.030121 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.030183 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:00.030443 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030485 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030522 2968376 retry.go:31] will retry after 293.620925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030561 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030575 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030582 2968376 retry.go:31] will retry after 156.365506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030650 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
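The GET requests against /api/v1/nodes/functional-232588 above are the node-readiness wait; both they and the kubectl applies fail with "connection refused" here because the apiserver on port 8441 is still coming up. With client-go the same readiness check is roughly this sketch (assumes a kubeconfig at the default location):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-232588", metav1.GetOptions{})
    	if err != nil {
    		panic(err) // e.g. "connection refused" while the apiserver is still starting
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Println("Ready:", c.Status)
    		}
    	}
    }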
	I1217 10:40:00.188354 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.324847 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.436532 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.436662 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.436836 2968376 retry.go:31] will retry after 279.814099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.516954 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.518501 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.518555 2968376 retry.go:31] will retry after 262.10287ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.531577 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.531724 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:00.533353 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 10:40:00.717812 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.781511 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.801403 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.801643 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.801671 2968376 retry.go:31] will retry after 799.844048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868602 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.868642 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868698 2968376 retry.go:31] will retry after 554.70169ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.031171 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.031268 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.031636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.424206 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:01.486829 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.486884 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.486903 2968376 retry.go:31] will retry after 534.910165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.531036 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.531190 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.531514 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.601938 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:01.666361 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.666415 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.666435 2968376 retry.go:31] will retry after 494.63938ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.022963 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:02.030812 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.030945 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.031372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:02.031439 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:02.093352 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.093469 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.093495 2968376 retry.go:31] will retry after 1.147395482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.161756 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:02.224785 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.224835 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.224873 2968376 retry.go:31] will retry after 722.380129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.530243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.530335 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.530682 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:02.948277 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:03.019220 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.023774 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.023820 2968376 retry.go:31] will retry after 1.527910453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.031105 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.031182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.031525 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:03.241898 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:03.304153 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.304205 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.304227 2968376 retry.go:31] will retry after 2.808262652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.530353 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.530425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.530767 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.030262 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.030340 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.030662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.530190 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.530267 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.530613 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:04.530682 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:04.552783 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:04.614277 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:04.618634 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:04.618671 2968376 retry.go:31] will retry after 1.686088172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:05.031243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.031319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.031611 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:05.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.530314 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.530636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.030216 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.030295 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.030584 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.113005 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:06.174987 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.175028 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.175048 2968376 retry.go:31] will retry after 2.620064864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.305352 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:06.366722 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.366771 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.366790 2968376 retry.go:31] will retry after 6.20410258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.531098 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.531170 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:06.531566 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
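Interleaved with the apply retries, node_ready.go polls GET /api/v1/nodes/functional-232588 roughly every 500 ms; the empty status and milliseconds=0 in each "Response" line show the TCP dial being refused before any HTTP exchange takes place. The following is a minimal client-go sketch of the Ready-condition check this loop corresponds to; the kubeconfig path and node name are taken from the log, while the standalone main is illustrative.

    // Sketch of the node Ready check being polled above: fetch the node and
    // look for the Ready condition with status True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            // "connect: connection refused" lands here while the apiserver is down.
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(context.Background(), cs, "functional-232588")
        fmt.Println(ready, err)
    }

Until the dial succeeds, nodeReady keeps returning the transport error, which is exactly the "(will retry)" warning repeating through the log.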
	I1217 10:40:07.030285 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.030361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.030703 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:07.530195 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.530269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.530540 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.030245 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.030326 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.530335 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.530413 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.530732 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.796304 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:08.853426 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:08.857034 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:08.857067 2968376 retry.go:31] will retry after 3.174722269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:09.030586 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:09.030666 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:09.031008 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:09.031064 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:09.530804 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:09.530879 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:09.531204 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:10.031140 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:10.031218 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:10.031521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:10.530229 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:10.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:10.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:11.030272 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:11.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:11.030674 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:11.530355 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:11.530450 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:11.530745 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:11.530788 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:12.030183 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:12.030259 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:12.030568 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:12.032754 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:12.104534 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.104594 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.104617 2968376 retry.go:31] will retry after 7.427014064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.531116 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:12.531194 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:12.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:12.571824 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:12.627783 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.631439 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.631473 2968376 retry.go:31] will retry after 5.673499761s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:13.031007 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:13.031079 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:13.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:13.530133 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:13.530207 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:13.530473 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:14.030881 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:14.030963 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:14.031294 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:14.031348 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:14.531063 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:14.531139 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:14.531511 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:15.030415 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:15.030505 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:15.030865 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:15.530246 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:15.530327 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:15.530615 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:16.030257 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:16.030335 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:16.030683 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:16.530343 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:16.530412 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:16.530735 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:16.530792 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:17.030348 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:17.030438 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:17.030746 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:17.530427 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:17.530508 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:17.530854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:18.031138 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:18.031239 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:18.031524 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:18.306153 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:18.363523 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:18.367149 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:18.367184 2968376 retry.go:31] will retry after 11.676089788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:18.530483 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:18.530628 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:18.530998 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:18.531054 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:19.031060 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:19.031138 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:19.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:19.530144 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:19.530217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:19.530501 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:19.532780 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:19.596086 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:19.596134 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:19.596153 2968376 retry.go:31] will retry after 6.09625298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:20.031102 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:20.031251 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:20.031747 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:20.530743 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:20.530896 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:20.531474 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:20.531549 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:21.030954 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:21.031034 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:21.031324 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:21.531097 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:21.531170 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:21.531522 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:22.030145 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:22.030232 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:22.030617 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:22.530952 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:22.531023 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:22.531286 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:23.031049 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:23.031121 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:23.031488 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:23.031552 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:23.531151 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:23.531233 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:23.531594 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:24.030205 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:24.030271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:24.030618 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:24.530297 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:24.530374 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:24.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:25.030556 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:25.030634 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:25.031013 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:25.530898 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:25.530990 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:25.531308 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:25.531351 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:25.692701 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:25.761074 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:25.761116 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:25.761134 2968376 retry.go:31] will retry after 8.308022173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:26.030656 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:26.030736 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:26.031050 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:26.530816 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:26.530887 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:26.531227 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:27.030620 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:27.030689 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:27.030975 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:27.530810 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:27.530882 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:27.531225 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:28.031037 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:28.031121 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:28.031512 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:28.031588 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:28.530251 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:28.530319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:28.530583 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:29.030525 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:29.030614 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:29.031053 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:29.530775 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:29.530846 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:29.531189 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:30.032544 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:30.032629 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:30.032970 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:30.033031 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:30.044190 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:30.141158 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:30.141207 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:30.141228 2968376 retry.go:31] will retry after 21.251088353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:30.530770 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:30.530848 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:30.531184 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:31.031023 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:31.031097 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:31.031429 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:31.530162 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:31.530338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:31.530687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:32.030318 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:32.030410 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:32.030863 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:32.530571 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:32.530648 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:32.531098 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:32.531174 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:33.030920 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:33.031010 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:33.031359 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:33.531147 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:33.531219 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:33.531570 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:34.030334 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:34.030418 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:34.030775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:34.070045 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:34.128651 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:34.132259 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:34.132293 2968376 retry.go:31] will retry after 23.004999937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
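Taken together, the section shows two interleaved retry chains (storageclass.yaml and storage-provisioner.yaml) whose delays grow roughly geometrically with jitter, from 494 ms and 722 ms at the start up through 11.7 s, 21.3 s, and 23.0 s here. A sketch of the same shape follows, using wait.ExponentialBackoff from k8s.io/apimachinery as a stand-in for minikube's retry package; the Factor, Jitter, and Steps values are assumptions chosen to match the observed intervals, not minikube's actual configuration.

    // Sketch: retry an apply with jittered exponential backoff, the pattern
    // the retry.go lines above exhibit. Backoff parameters are assumptions.
    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        backoff := wait.Backoff{
            Duration: 500 * time.Millisecond, // first delay, as in the log
            Factor:   2.0,                    // roughly doubles each attempt
            Jitter:   0.5,                    // spreads the retries out
            Steps:    10,
        }
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            out, err := exec.Command("kubectl", "apply", "--force",
                "-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
            if err != nil {
                fmt.Printf("apply failed, will retry: %s\n", out)
                return false, nil // transient failure: keep retrying
            }
            return true, nil
        })
        if err != nil {
            fmt.Println("gave up:", err)
        }
    }

The widening gaps keep the loop from hammering a dead port, but as the remaining log shows, no amount of backoff helps while the apiserver never comes back on 8441.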
	I1217 10:40:34.530392 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:34.530466 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:34.530735 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:35.030855 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:35.030938 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:35.031252 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:35.031308 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
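Between the addon retries, the loop driving all of these GETs is a node-readiness poll: every 500ms minikube fetches the node object and checks its Ready condition, emitting the "will retry" warning above whenever the API server is unreachable. A minimal client-go sketch of such a loop, with the 500ms interval and node name taken from the log and the rest illustrative (not minikube's actual node_ready.go):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the
// context expires, warning and retrying on transient errors such as
// "connection refused", as the log above does.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	// Kubeconfig path taken from the ssh_runner.go lines in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "functional-232588"))
}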
	I1217 10:40:35.530763 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:35.530834 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:35.531181 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:36.030980 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:36.031106 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:36.031458 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:36.530826 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:36.530905 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:36.531257 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:37.031180 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:37.031261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:37.031662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:37.031754 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:37.530206 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:37.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:37.530649 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:38.030423 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:38.030503 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:38.030854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:38.530587 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:38.530659 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:38.531005 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:39.030844 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:39.030924 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:39.031203 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:39.531010 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:39.531096 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:39.531446 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:39.531521 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:40.031145 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:40.031231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:40.031658 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:40.530343 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:40.530420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:40.530707 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:41.030992 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:41.031064 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:41.031409 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:41.531176 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:41.531252 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:41.531592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:41.531649 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:42.030335 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:42.030418 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:42.030713 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:42.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:42.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:42.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:43.030224 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:43.030309 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:43.030694 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:43.530392 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:43.530468 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:43.530795 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:44.030259 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:44.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:44.030666 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:44.030720 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:44.530388 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:44.530467 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:44.530803 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:45.030897 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:45.032736 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:45.034090 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 10:40:45.530857 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:45.530936 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:45.531262 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:46.031009 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:46.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:46.031343 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:46.031380 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:46.531073 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:46.531152 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:46.531521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:47.030170 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:47.030255 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:47.030602 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:47.530303 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:47.530374 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:47.530644 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:48.030323 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:48.030406 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:48.030744 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:48.530500 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:48.530605 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:48.530966 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:48.531023 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:49.030795 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:49.030871 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:49.031172 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:49.530860 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:49.530935 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:49.531267 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:50.031129 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:50.031208 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:50.031548 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:50.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:50.530303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:50.530574 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:51.030327 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:51.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:51.030749 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:51.030806 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:51.393321 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:51.454332 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:51.458316 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:51.458350 2968376 retry.go:31] will retry after 15.302727777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:51.530571 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:51.530643 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:51.530966 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:52.030247 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:52.030332 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:52.030623 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:52.530219 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:52.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:52.530691 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:53.030289 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:53.030364 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:53.030698 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:53.530380 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:53.530457 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:53.530780 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:53.530833 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:54.030549 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:54.030652 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:54.030947 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:54.530639 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:54.530716 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:54.531043 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:55.030934 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:55.031013 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:55.031455 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:55.531099 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:55.531193 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:55.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:55.531578 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:56.030320 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.030398 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.030700 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.530273 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.030303 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.030719 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.138000 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:57.193212 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:57.197444 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:57.197478 2968376 retry.go:31] will retry after 20.170499035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
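Each ssh_runner.go line above executes kubectl inside the minikube node over SSH and surfaces the remote exit status ("Process exited with status 1"). A minimal sketch of that remote-command pattern with golang.org/x/crypto/ssh; the address, port, user, and credential below are placeholders, not minikube's actual connection details:

package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",                                   // placeholder user
		Auth:            []ssh.AuthMethod{ssh.Password("changeme")}, // placeholder credential
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),                // acceptable only for a throwaway local VM
	}
	client, err := ssh.Dial("tcp", "192.168.49.2:22", cfg) // port is a placeholder
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// CombinedOutput captures the remote stdout and stderr together; a
	// non-nil *ssh.ExitError carries the remote exit status.
	out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Printf("%s", out)
	if exitErr, ok := err.(*ssh.ExitError); ok {
		fmt.Printf("Process exited with status %d\n", exitErr.ExitStatus())
	}
}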
	I1217 10:40:57.530886 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.530963 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.531316 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:58.031030 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:58.031101 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:58.031459 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:58.031521 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:58.530185 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:58.530279 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:58.530591 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:59.030603 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:59.030673 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:59.031011 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:59.530181 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:59.530254 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:59.530556 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:00.031130 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:00.031217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:00.031532 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:00.031582 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:00.530232 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:00.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:00.530652 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:01.030118 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:01.030193 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:01.030459 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:01.530122 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:01.530201 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:01.530574 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:02.030319 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:02.030407 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:02.030755 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:02.530195 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:02.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:02.530558 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:02.530607 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:03.030232 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:03.030305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:03.030654 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:03.530352 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:03.530433 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:03.530775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:04.030460 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:04.030552 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:04.030847 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:04.530572 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:04.530659 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:04.530971 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:04.531027 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:05.031015 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:05.031091 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:05.031381 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:05.531137 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:05.531210 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:05.531480 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.030204 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:06.030287 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:06.030672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.530230 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:06.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:06.530661 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.762229 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:06.820073 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:06.820109 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:06.820128 2968376 retry.go:31] will retry after 35.040877283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:07.030604 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:07.030693 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:07.030967 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:07.031017 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:07.530709 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:07.530791 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:07.531216 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:08.030859 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:08.030956 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:08.031280 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:08.531028 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:08.531110 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:08.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:09.030137 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:09.030210 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:09.030518 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:09.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:09.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:09.530639 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:09.530700 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:10.030448 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:10.030530 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:10.030820 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:10.530483 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:10.530577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:10.530870 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:11.030249 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:11.030346 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:11.030673 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:11.530364 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:11.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:11.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:11.530760 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:12.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:12.030329 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:12.030660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:12.530205 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:12.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:12.530608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:13.030312 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:13.030400 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:13.030672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:13.530341 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:13.530415 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:13.530818 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:13.530880 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:14.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:14.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:14.030678 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:14.530384 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:14.530453 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:14.530771 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:15.030787 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:15.030877 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:15.031291 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:15.531114 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:15.531196 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:15.531528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:15.531590 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:16.030220 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:16.030296 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:16.030573 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:16.530300 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:16.530383 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:16.530739 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:17.030458 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:17.030539 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:17.030882 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:17.368346 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:17.428304 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:17.431873 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:17.431904 2968376 retry.go:31] will retry after 38.363968078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:17.531154 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:17.531231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:17.531502 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:18.030234 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:18.030352 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:18.030774 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:18.030859 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:18.530515 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:18.530607 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:18.530942 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:19.030903 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:19.030980 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:19.031301 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:19.530780 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:19.530855 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:19.531233 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:20.031004 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:20.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:20.031456 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:20.031515 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET /api/v1/nodes/functional-232588 poll repeated every ~500ms from 10:41:20.530 through 10:41:41.530, every attempt returning an empty response, with node_ready.go:55 logging the identical "connection refused" retry warning roughly every 2.5s ...]
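The loop above is minikube's node-readiness wait: node_ready.go issues a GET for the node object every 500ms and keeps retrying while the apiserver socket at 192.168.49.2:8441 refuses connections. Below is a minimal sketch of that kind of readiness poll using client-go; the 500ms interval, the timeout value, the kubeconfig path, and the waitNodeReady helper are illustrative assumptions, not minikube's actual implementation.

// readiness_poll_sketch.go: a minimal sketch of polling a node's Ready
// condition with client-go. Interval, timeout, kubeconfig path, and the
// helper name are illustrative assumptions, not minikube's own code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports Ready=True or the
// timeout elapses. Transient errors such as "connection refused" while
// the apiserver is down are logged and retried, not treated as fatal.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			return false, nil // swallow the error so the poll keeps going
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// The default ~/.kube/config path is used here for illustration;
	// minikube points KUBECONFIG at a profile-specific file instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "functional-232588", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}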
	I1217 10:41:41.862178 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:41.923706 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923759 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923872 2968376 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
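The addon enablement path shells out to the bundled kubectl and, per addons.go:477, marks a failed apply for retry rather than aborting. A hedged sketch of that retry-around-apply pattern follows, using exponential backoff around an exec'd kubectl; the backoff parameters and the applyWithRetry helper are assumptions for illustration, while the binary and manifest paths are taken from the log.

// apply_retry_sketch.go: retry "kubectl apply" with exponential backoff,
// in the spirit of the "apply failed, will retry" log line above.
// Backoff parameters and the helper name are illustrative assumptions.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func applyWithRetry(kubectl, kubeconfig, manifest string) error {
	backoff := wait.Backoff{Steps: 5, Duration: time.Second, Factor: 2.0}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		if out, err := cmd.CombinedOutput(); err != nil {
			// e.g. the apiserver is still down: log and retry.
			fmt.Printf("apply failed, will retry: %v\n%s", err, out)
			return false, nil
		}
		return true, nil
	})
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
	)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}

Note that kubectl's suggestion to pass --validate=false would only mask the real problem here: validation fails because downloading the openapi schema needs the same unreachable apiserver.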
	I1217 10:41:42.031033 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:42.031113 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:42.031454 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the poll continued unchanged every ~500ms from 10:41:42.530 through 10:41:55.531 (one response took 4ms, the rest 0ms), with the same node_ready.go:55 "connection refused" retry warning logged roughly every 2.5s ...]
	I1217 10:41:55.796501 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:55.858122 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858175 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858259 2968376 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 10:41:55.863014 2968376 out.go:179] * Enabled addons: 
	I1217 10:41:55.865747 2968376 addons.go:530] duration metric: took 1m56.871522842s for enable addons: enabled=[]
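Every failure in this run is the same symptom: the TCP socket at 192.168.49.2:8441 refuses connections, meaning nothing is listening on the apiserver port (a refusal comes back immediately, whereas a firewall drop manifests as a timeout). A quick probe that distinguishes the two cases is sketched below; the address comes from the log and the 2s timeout is arbitrary.

// probe_sketch.go: distinguish "connection refused" (no listener) from a
// timeout (packets dropped) on the apiserver endpoint seen in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// "connect: connection refused" => host up, no process bound;
		// "i/o timeout" => host or port unreachable or filtered.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting TCP connections")
}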
	I1217 10:41:56.030483 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:56.030561 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:56.030907 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:56.530592 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:56.530668 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:56.530973 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:57.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:57.030336 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:57.030717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:57.530234 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:57.530308 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:57.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:58.033611 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:58.033711 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:58.033996 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:58.034053 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:58.530276 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:58.530381 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:58.530759 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:59.030773 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:59.030845 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:59.031207 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:59.531008 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:59.531115 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:59.531404 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:00.030325 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:00.030471 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:00.030856 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:00.530785 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:00.530901 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:00.531226 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:00.531288 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:01.030976 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:01.031043 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:01.031299 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:01.531109 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:01.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:01.531522 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:02.030231 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:02.030334 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:02.030695 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:02.530421 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:02.530490 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:02.530829 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:03.030527 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:03.030623 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:03.030985 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:03.031044 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:03.530805 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:03.530890 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:03.531241 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:04.030644 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:04.030719 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:04.031014 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:04.530743 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:04.530821 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:04.531126 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:05.030982 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:05.031061 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:05.031449 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:05.031509 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:05.530161 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:05.530231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:05.530503 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:06.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:06.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:06.030811 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:06.530502 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:06.530577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:06.530933 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:07.030646 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:07.030722 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:07.031021 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:07.530377 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:07.530455 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:07.530792 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:07.530847 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:08.030509 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:08.030589 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:08.030943 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:08.530625 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:08.530698 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:08.530961 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:09.030865 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:09.030938 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:09.031271 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:09.531064 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:09.531145 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:09.531546 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:09.531604 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:10.030184 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:10.030265 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:10.030604 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:10.530301 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:10.530388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:10.530737 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:11.030319 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:11.030395 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:11.030731 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:11.530208 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:11.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:11.530559 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:12.030254 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:12.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:12.030671 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:12.030728 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... the same 500ms GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 request/empty-response cycle repeats from 10:42:12.530 through 10:43:13.530, with node_ready.go re-logging the identical "connection refused" retry warning every 2-2.5s; ~765 near-duplicate log lines elided ... ]
	I1217 10:43:14.030433 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:14.030532 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:14.030898 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:14.530191 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:14.530265 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:14.530525 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:15.030561 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:15.030642 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:15.031035 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:15.031108 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:15.530771 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:15.530851 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:15.531186 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:16.030941 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:16.031058 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:16.031367 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:16.531127 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:16.531204 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:16.531551 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:17.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:17.030359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:17.030730 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:17.530432 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:17.530503 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:17.530762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:17.530802 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:18.030287 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:18.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:18.030726 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:18.530417 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:18.530491 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:18.530823 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:19.030615 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:19.030686 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:19.030957 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:19.530751 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:19.530822 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:19.531145 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:19.531219 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:20.030996 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:20.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:20.031466 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:20.530134 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:20.530218 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:20.530480 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:21.030224 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:21.030315 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:21.030716 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:21.530405 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:21.530495 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:21.530849 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:22.030215 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:22.030290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:22.030563 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:22.030612 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:22.530260 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:22.530336 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:22.530660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:23.030248 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:23.030322 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:23.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:23.530351 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:23.530432 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:23.530727 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:24.030221 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:24.030298 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:24.030639 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:24.030700 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:24.530199 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:24.530274 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:24.530592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:25.030528 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:25.030601 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:25.030897 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:25.530568 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:25.530644 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:25.531019 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:26.030843 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:26.030932 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:26.031265 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:26.031321 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:26.531018 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:26.531091 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:26.531351 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:27.031180 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:27.031252 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:27.031581 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:27.530252 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:27.530331 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:27.530667 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:28.030211 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:28.030284 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:28.030564 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:28.530284 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:28.530361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:28.530659 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:28.530707 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:29.030650 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:29.030721 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:29.031052 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:29.530749 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:29.530823 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:29.531149 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:30.031046 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:30.031137 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:30.031519 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:30.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:30.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:30.530684 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:30.530743 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:31.030206 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:31.030288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:31.030560 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:31.530234 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:31.530321 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:31.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:32.030432 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:32.030511 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:32.030861 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:32.530557 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:32.530633 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:32.530986 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:32.531075 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:33.030858 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:33.030935 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:33.031277 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:33.531109 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:33.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:33.531577 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:34.030306 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:34.030382 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:34.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:34.530223 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:34.530319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:34.530708 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:35.030567 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:35.030667 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:35.031054 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:35.031114 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:35.530355 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:35.530425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:35.530748 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:36.030259 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:36.030360 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:36.030744 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:36.530439 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:36.530514 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:36.530839 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:37.030151 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:37.030234 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:37.030595 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:37.530363 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:37.530442 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:37.530778 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:37.530853 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:38.030262 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:38.030348 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:38.030702 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:38.530390 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:38.530461 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:38.530733 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:39.030687 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:39.030760 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:39.031111 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:39.530923 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:39.531001 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:39.531339 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:39.531397 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:40.030955 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:40.031030 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:40.031319 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:40.531063 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:40.531139 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:40.531495 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:41.031163 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:41.031238 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:41.031591 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:41.530249 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:41.530323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:41.530587 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:42.030362 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:42.030453 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:42.030854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:42.030924 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:42.530577 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:42.530657 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:42.531023 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:43.030790 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:43.030866 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:43.031190 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:43.530930 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:43.531021 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:43.531357 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:44.031028 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:44.031107 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:44.031450 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:44.031512 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:44.530164 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:44.530233 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:44.530544 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:45.031170 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:45.031261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:45.031590 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:45.530211 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:45.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:45.530682 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:46.030354 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:46.030422 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:46.030698 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:46.530232 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:46.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:46.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:46.530696 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:47.030366 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:47.030444 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:47.030742 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:47.530404 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:47.530478 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:47.530752 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:48.030496 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:48.030575 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:48.030882 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:48.530214 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:48.530292 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:48.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:49.030376 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.030444 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.030775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:49.030832 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:49.530474 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.530545 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.530877 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.030914 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.030991 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.031360 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.531113 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.531458 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.030169 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.030240 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.030588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.530249 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.530328 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.530647 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:51.530704 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:52.030197 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.030269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.030691 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:52.530433 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.530506 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.530821 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.030577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.030964 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.530257 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:54.030277 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.030388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.030914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:54.030974 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:54.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.530304 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.530629 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.030620 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.030692 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.030975 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.530238 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.530627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.030306 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.530440 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.530509 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.530781 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:56.530820 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:57.030493 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.030591 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.030923 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:57.530749 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.530825 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.531153 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.030917 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.030996 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.031309 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.530552 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.530658 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.531261 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:58.531332 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:59.031089 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.031172 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.031521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:59.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.530308 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.530593 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.030748 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.030831 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.031142 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.530904 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.530989 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.531375 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:00.531435 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:01.030988 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.031055 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.031330 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:01.531137 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.531217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.531543 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.030312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.030660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.530197 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.530553 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:03.030250 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:03.030323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:03.030669 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:03.030723 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	... (the poll above repeats unchanged on a ~500ms interval from 10:44:03.530 through 10:45:05.030: type.go:168 logs an empty "Request Body", round_trippers.go:527/632 log the same GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 with an empty response, and node_ready.go:55 repeats the warning roughly every 2 seconds with: Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused) ...
	I1217 10:45:05.530486 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:05.530562 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:05.530936 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:06.030547 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:06.030629 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:06.031023 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:06.031086 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:06.530803 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:06.530879 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:06.531191 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:07.031034 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:07.031122 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:07.031472 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:07.530868 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:07.530945 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:07.531250 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:08.031006 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:08.031085 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:08.031378 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:08.031422 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:08.530162 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:08.530246 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:08.530602 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:09.030388 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:09.030461 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:09.030757 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:09.530206 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:09.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:09.530546 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:10.031113 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:10.031190 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:10.031553 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:10.031610 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:10.530237 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:10.530307 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:10.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:11.030979 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:11.031054 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:11.031384 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:11.531138 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:11.531212 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:11.531564 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:12.030178 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:12.030257 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:12.030588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:12.530188 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:12.530255 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:12.530534 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:12.530573 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:13.030280 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:13.030360 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:13.030766 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:13.530219 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:13.530304 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:13.530671 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:14.030210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:14.030285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:14.030552 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:14.530221 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:14.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:14.530608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:14.530657 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:15.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:15.030575 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:15.030910 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:15.530437 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:15.530554 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:15.530900 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:16.030216 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:16.030305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:16.030644 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:16.530338 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:16.530413 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:16.530783 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:16.530841 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:17.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:17.030561 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:17.030881 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:17.530218 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:17.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:17.530621 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:18.030226 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:18.030315 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:18.030655 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:18.530196 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:18.530278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:18.530553 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:19.031062 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:19.031145 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:19.031472 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:19.031531 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:19.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:19.530282 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:19.530610 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:20.030544 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:20.030624 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:20.030925 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:20.530202 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:20.530275 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:20.530644 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:21.030361 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:21.030463 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:21.030812 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:21.530486 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:21.530558 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:21.530871 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:21.530921 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:22.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:22.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:22.030680 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:22.530233 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:22.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:22.530661 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:23.030227 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:23.030300 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:23.030564 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:23.530250 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:23.530342 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:23.530762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:24.030398 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:24.030590 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:24.031036 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:24.031106 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:24.530825 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:24.530892 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:24.531165 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:25.031145 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:25.031231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:25.031590 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:25.530287 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:25.530385 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:25.530787 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:26.030475 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:26.030551 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:26.030935 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:26.530656 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:26.530743 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:26.531109 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:26.531165 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:27.030982 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:27.031070 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:27.031412 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:27.530790 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:27.530858 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:27.531125 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:28.030995 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:28.031074 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:28.031452 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:28.530173 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:28.530254 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:28.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:29.030339 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:29.030432 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:29.030724 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:29.030767 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:29.530501 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:29.530583 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:29.530943 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:30.030872 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:30.030956 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:30.031277 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:30.531026 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:30.531096 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:30.531388 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:31.031173 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:31.031248 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:31.031592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:31.031655 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:31.530219 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:31.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:31.530619 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:32.030304 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:32.030380 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:32.030665 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:32.530207 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:32.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:32.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:33.030347 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:33.030429 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:33.030767 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:33.530186 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:33.530259 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:33.530528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:33.530569 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:34.030234 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:34.030320 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:34.030648 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:34.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:34.530303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:34.530668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:35.030514 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:35.030598 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:35.030879 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:35.530539 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:35.530621 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:35.530944 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:35.530999 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:36.030792 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:36.030868 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:36.031197 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:36.530952 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:36.531027 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:36.531293 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:37.031128 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:37.031222 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:37.031596 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:37.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:37.530284 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:37.530618 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:38.030192 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:38.030278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:38.030552 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:38.030630 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:38.530218 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:38.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:38.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:39.030659 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:39.030738 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:39.031056 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:39.530839 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:39.530914 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:39.531181 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:40.031117 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.031198 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.031558 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:40.031631 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:40.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.530292 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.530625 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.030178 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.030256 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.030535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.030366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.030455 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.030891 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.530441 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.530519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.530792 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:42.530833 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:43.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.030585 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.030904 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:43.530228 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.530630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.030307 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.030381 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.030707 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.530227 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.530296 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:45.031339 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.031427 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.031745 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:45.031809 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:45.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.530545 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.030321 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.030687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.530366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.530762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.030423 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.030519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.030809 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.530502 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.530580 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.530914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:47.530970 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:48.030658 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.030731 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.031047 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:48.530357 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.530426 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.530764 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.030805 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.030882 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.031204 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.530982 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.531053 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:49.531427 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:50.031030 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.031115 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.031532 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:50.530205 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.530299 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.530623 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.030286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.030614 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.530323 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.530401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.530711 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:52.030254 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.030329 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.030627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:52.030687 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:52.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.530659 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.030195 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.030278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.030640 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.530268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.530359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.530765 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:54.030501 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.030589 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.030906 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:54.030956 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:54.530370 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.530458 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.530775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.030784 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.030866 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.031248 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.531028 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.531111 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.531412 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:56.031149 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.031232 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.031533 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:56.031587 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.530282 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.030260 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.030673 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.530209 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.530285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.530565 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.030688 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.530418 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.530493 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.530876 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:58.530935 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:59.030221 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.030349 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.030710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:59.530411 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.530486 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.530845 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:46:00.030785 2968376 type.go:168] "Request Body" body=""
	I1217 10:46:00.030868 2968376 node_ready.go:38] duration metric: took 6m0.00085226s for node "functional-232588" to be "Ready" ...
	I1217 10:46:00.039967 2968376 out.go:203] 
	W1217 10:46:00.043066 2968376 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 10:46:00.043095 2968376 out.go:285] * 
	W1217 10:46:00.047185 2968376 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:46:00.056487 2968376 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457543735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457644238Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457744256Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457820841Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457879235Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457938007Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457996279Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.458061483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.458126967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.458208163Z" level=info msg="Connect containerd service"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.458558284Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.459206966Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.475859249Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.475925241Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.475972108Z" level=info msg="Start subscribing containerd event"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.476024300Z" level=info msg="Start recovering state"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.528406781Z" level=info msg="Start event monitor"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.528633279Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.528730630Z" level=info msg="Start streaming server"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.528823026Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.529043731Z" level=info msg="runtime interface starting up..."
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.529125706Z" level=info msg="starting plugins..."
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.529191904Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 10:39:57 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.529865783Z" level=info msg="containerd successfully booted in 0.097082s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:46:02.036549    8459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:02.037349    8459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:02.039071    8459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:02.039663    8459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:02.041345    8459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:46:02 up 16:28,  0 user,  load average: 0.33, 0.26, 0.77
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 10:45:59 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:45:59 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 17 10:45:59 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:45:59 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:45:59 functional-232588 kubelet[8349]: E1217 10:45:59.802077    8349 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:45:59 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:45:59 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:00 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 17 10:46:00 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:00 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:00 functional-232588 kubelet[8354]: E1217 10:46:00.578565    8354 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:00 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:00 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:01 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 17 10:46:01 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:01 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:01 functional-232588 kubelet[8374]: E1217 10:46:01.313373    8374 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:01 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:01 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:01 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 17 10:46:01 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:02 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:02 functional-232588 kubelet[8463]: E1217 10:46:02.075789    8463 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:02 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:02 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
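
Diagnosis (annotation, not captured output): the kubelet section above is the root cause of this failure. kubelet v1.35.0-rc.1 exits on every systemd restart with "kubelet is configured to not run on a host using cgroup v1", so the apiserver behind 192.168.49.2:8441 never comes up, which is why every node-ready poll earlier in the log ends in "connection refused". A quick way to confirm the host cgroup version, assuming shell access to the CI host or the node container:

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/
	docker exec functional-232588 stat -fc %T /sys/fs/cgroup/

The Ubuntu 20.04 host with kernel 5.15.0-1084-aws shown under "==> kernel <==" defaults to cgroup v1; the usual remedy is booting the host with systemd.unified_cgroup_hierarchy=1 on the kernel command line rather than downgrading kubelet. kubelet also grew a --fail-cgroupv1 flag around v1.31 to opt back into v1, but whether it still takes effect on v1.35.0-rc.1 would need to be verified upstream.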
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (393.419897ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (368.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-232588 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-232588 get po -A: exit status 1 (63.367342ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-232588 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-232588 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-232588 get po -A"
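
Diagnosis (annotation, not captured output): this failure is purely downstream of the SoftStart failure above; with nothing listening on 192.168.49.2:8441, kubectl can only get "connection refused". The command that functional_test.go:711 runs can be reproduced by hand (context name taken from the log); on a healthy cluster its stdout would include kube-system pods, which is exactly what the assertion at functional_test.go:720 checks for:

	kubectl --context functional-232588 get po -A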
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
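Diagnosis (annotation, not captured output): two details in the inspect output are worth reading against the failure. The node container itself is healthy ("Status": "running", "RestartCount": 0), and the apiserver port is published only on loopback (8441/tcp -> 127.0.0.1:35736), so the refused connections originate inside the guest, not in Docker networking. The mapping can be pulled with the same Go template minikube itself uses later in this log, adapted to port 8441:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-232588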
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (303.166949ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-626013 ssh sudo cat /etc/ssl/certs/29245742.pem                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh sudo cat /usr/share/ca-certificates/29245742.pem                                                                                          │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image load --daemon kicbase/echo-server:functional-626013 --alsologtostderr                                                                   │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image save kicbase/echo-server:functional-626013 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image rm kicbase/echo-server:functional-626013 --alsologtostderr                                                                              │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ update-context │ functional-626013 update-context --alsologtostderr -v=2                                                                                                         │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ update-context │ functional-626013 update-context --alsologtostderr -v=2                                                                                                         │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ update-context │ functional-626013 update-context --alsologtostderr -v=2                                                                                                         │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image save --daemon kicbase/echo-server:functional-626013 --alsologtostderr                                                                   │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls --format short --alsologtostderr                                                                                                     │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls --format yaml --alsologtostderr                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh            │ functional-626013 ssh pgrep buildkitd                                                                                                                           │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ image          │ functional-626013 image ls --format json --alsologtostderr                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls --format table --alsologtostderr                                                                                                     │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr                                                          │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image          │ functional-626013 image ls                                                                                                                                      │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ delete         │ -p functional-626013                                                                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ start          │ -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ start          │ -p functional-232588 --alsologtostderr -v=8                                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:39 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:39:54
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:39:54.887492 2968376 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:39:54.887669 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887679 2968376 out.go:374] Setting ErrFile to fd 2...
	I1217 10:39:54.887684 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887953 2968376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:39:54.888377 2968376 out.go:368] Setting JSON to false
	I1217 10:39:54.889321 2968376 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":58945,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:39:54.889394 2968376 start.go:143] virtualization:  
	I1217 10:39:54.892820 2968376 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:39:54.896642 2968376 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:39:54.896710 2968376 notify.go:221] Checking for updates...
	I1217 10:39:54.900325 2968376 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:39:54.903432 2968376 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:54.906306 2968376 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:39:54.909105 2968376 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:39:54.911889 2968376 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:39:54.915217 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:54.915331 2968376 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:39:54.937972 2968376 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:39:54.938091 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.000760 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:54.991784263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.000879 2968376 docker.go:319] overlay module found
	I1217 10:39:55.005745 2968376 out.go:179] * Using the docker driver based on existing profile
	I1217 10:39:55.010762 2968376 start.go:309] selected driver: docker
	I1217 10:39:55.010794 2968376 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.010914 2968376 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:39:55.011044 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.065164 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:55.056463493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.065569 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:55.065633 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:55.065694 2968376 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.070664 2968376 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:39:55.073373 2968376 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:39:55.076286 2968376 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:39:55.079282 2968376 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:39:55.079315 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:55.079350 2968376 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:39:55.079358 2968376 cache.go:65] Caching tarball of preloaded images
	I1217 10:39:55.079437 2968376 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:39:55.079447 2968376 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 10:39:55.079550 2968376 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:39:55.100219 2968376 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:39:55.100251 2968376 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 10:39:55.100265 2968376 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:39:55.100297 2968376 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:39:55.100355 2968376 start.go:364] duration metric: took 36.061µs to acquireMachinesLock for "functional-232588"
	I1217 10:39:55.100378 2968376 start.go:96] Skipping create...Using existing machine configuration
	I1217 10:39:55.100389 2968376 fix.go:54] fixHost starting: 
	I1217 10:39:55.100690 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:55.118322 2968376 fix.go:112] recreateIfNeeded on functional-232588: state=Running err=<nil>
	W1217 10:39:55.118352 2968376 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 10:39:55.121614 2968376 out.go:252] * Updating the running docker "functional-232588" container ...
	I1217 10:39:55.121666 2968376 machine.go:94] provisionDockerMachine start ...
	I1217 10:39:55.121762 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.140448 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.140568 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.140576 2968376 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:39:55.272992 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.273058 2968376 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:39:55.273155 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.294100 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.294200 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.294209 2968376 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:39:55.433566 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.433651 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.452012 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.452130 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.452152 2968376 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:39:55.584734 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 10:39:55.584801 2968376 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:39:55.584835 2968376 ubuntu.go:190] setting up certificates
	I1217 10:39:55.584846 2968376 provision.go:84] configureAuth start
	I1217 10:39:55.584917 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:55.602169 2968376 provision.go:143] copyHostCerts
	I1217 10:39:55.602226 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602261 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:39:55.602273 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602347 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:39:55.602482 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602507 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:39:55.602512 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602540 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:39:55.602588 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602609 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:39:55.602618 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602651 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:39:55.602701 2968376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
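
Editor's note: the "generating server cert" step issues a server certificate whose SANs are exactly the list printed above. A minimal crypto/x509 sketch of that issuance follows; the throwaway CA below stands in for the ca.pem/ca-key.pem pair that the real flow loads from disk, and errors are elided for brevity.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for illustration; minikube loads its CA from disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-232588"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // cf. CertExpiration:26280h0m0s in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"functional-232588", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
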
	I1217 10:39:55.859794 2968376 provision.go:177] copyRemoteCerts
	I1217 10:39:55.859877 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:39:55.859950 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.877144 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:55.974879 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 10:39:55.974962 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:39:55.992960 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 10:39:55.993024 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 10:39:56.017007 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 10:39:56.017075 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:39:56.039037 2968376 provision.go:87] duration metric: took 454.177473ms to configureAuth
	I1217 10:39:56.039062 2968376 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:39:56.039248 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:56.039255 2968376 machine.go:97] duration metric: took 917.583269ms to provisionDockerMachine
	I1217 10:39:56.039263 2968376 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:39:56.039274 2968376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:39:56.039330 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:39:56.039374 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.064674 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.164379 2968376 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:39:56.167903 2968376 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 10:39:56.167924 2968376 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 10:39:56.167929 2968376 command_runner.go:130] > VERSION_ID="12"
	I1217 10:39:56.167934 2968376 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 10:39:56.167939 2968376 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 10:39:56.167943 2968376 command_runner.go:130] > ID=debian
	I1217 10:39:56.167947 2968376 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 10:39:56.167952 2968376 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 10:39:56.167958 2968376 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 10:39:56.168026 2968376 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:39:56.168043 2968376 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:39:56.168054 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:39:56.168116 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:39:56.168193 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:39:56.168199 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /etc/ssl/certs/29245742.pem
	I1217 10:39:56.168276 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:39:56.168280 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> /etc/test/nested/copy/2924574/hosts
	I1217 10:39:56.168325 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:39:56.175992 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:56.194065 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:39:56.211618 2968376 start.go:296] duration metric: took 172.340234ms for postStartSetup
	I1217 10:39:56.211696 2968376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:39:56.211740 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.229142 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.321408 2968376 command_runner.go:130] > 18%
	I1217 10:39:56.321497 2968376 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:39:56.325775 2968376 command_runner.go:130] > 160G
	I1217 10:39:56.326243 2968376 fix.go:56] duration metric: took 1.225850623s for fixHost
	I1217 10:39:56.326261 2968376 start.go:83] releasing machines lock for "functional-232588", held for 1.22589425s
	I1217 10:39:56.326382 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:56.351440 2968376 ssh_runner.go:195] Run: cat /version.json
	I1217 10:39:56.351467 2968376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:39:56.351509 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.351532 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.377953 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.378286 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.472298 2968376 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 10:39:56.558575 2968376 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 10:39:56.561329 2968376 ssh_runner.go:195] Run: systemctl --version
	I1217 10:39:56.567378 2968376 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 10:39:56.567418 2968376 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 10:39:56.567866 2968376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 10:39:56.572178 2968376 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 10:39:56.572242 2968376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:39:56.572327 2968376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:39:56.580077 2968376 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 10:39:56.580102 2968376 start.go:496] detecting cgroup driver to use...
	I1217 10:39:56.580153 2968376 detect.go:187] detected "cgroupfs" cgroup driver on host os
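
Editor's note: detect.go settles on "cgroupfs" here. As an assumption for illustration only (minikube's actual heuristic is more involved and also inspects the host's Docker daemon), one cheap way to distinguish cgroup v1 from v2 on a host is to probe for the unified hierarchy's control file:

    package main

    import (
        "fmt"
        "os"
    )

    // cgroupVersion reports v2 when the unified hierarchy's
    // cgroup.controllers file exists, v1 otherwise. Heuristic only.
    func cgroupVersion() string {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "v2 (systemd driver is typical)"
        }
        return "v1 (cgroupfs driver is typical)"
    }

    func main() { fmt.Println(cgroupVersion()) }
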
	I1217 10:39:56.580207 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:39:56.595473 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:39:56.608619 2968376 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:39:56.608683 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:39:56.624626 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:39:56.639198 2968376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:39:56.750544 2968376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:39:56.881240 2968376 docker.go:234] disabling docker service ...
	I1217 10:39:56.881321 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:39:56.896533 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:39:56.909686 2968376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:39:57.029179 2968376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:39:57.147650 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 10:39:57.160165 2968376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:39:57.172821 2968376 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 10:39:57.174291 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:39:57.183184 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:39:57.192049 2968376 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:39:57.192173 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:39:57.201301 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.210430 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:39:57.219288 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.228051 2968376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:39:57.235994 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:39:57.245724 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:39:57.254416 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
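
Editor's note: the run of sed commands above is a series of line-oriented regex rewrites of /etc/containerd/config.toml. The SystemdCgroup edit, restated as a Go regexp for clarity (illustrative only; the real flow shells out to sed on the node):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        // Same pattern as the sed one-liner: keep the indent, force false.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
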
	I1217 10:39:57.263062 2968376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:39:57.269668 2968376 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 10:39:57.270584 2968376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:39:57.278345 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.386138 2968376 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 10:39:57.532674 2968376 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:39:57.532750 2968376 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:39:57.536608 2968376 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 10:39:57.536637 2968376 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 10:39:57.536644 2968376 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1217 10:39:57.536652 2968376 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:57.536659 2968376 command_runner.go:130] > Access: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536664 2968376 command_runner.go:130] > Modify: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536669 2968376 command_runner.go:130] > Change: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536673 2968376 command_runner.go:130] >  Birth: -
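
Editor's note: "Will wait 60s for socket path" above is a poll loop against the containerd socket, succeeding once stat reports a socket file. A minimal sketch of that wait:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket,
    // or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
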
	I1217 10:39:57.537168 2968376 start.go:564] Will wait 60s for crictl version
	I1217 10:39:57.537224 2968376 ssh_runner.go:195] Run: which crictl
	I1217 10:39:57.540827 2968376 command_runner.go:130] > /usr/local/bin/crictl
	I1217 10:39:57.541302 2968376 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:39:57.573267 2968376 command_runner.go:130] > Version:  0.1.0
	I1217 10:39:57.573463 2968376 command_runner.go:130] > RuntimeName:  containerd
	I1217 10:39:57.573480 2968376 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 10:39:57.573656 2968376 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 10:39:57.575908 2968376 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:39:57.575979 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.593702 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.595828 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.613025 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.620756 2968376 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 10:39:57.623690 2968376 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:39:57.639560 2968376 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:39:57.643332 2968376 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 10:39:57.643691 2968376 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:39:57.643808 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:57.643873 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.668138 2968376 command_runner.go:130] > {
	I1217 10:39:57.668155 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.668160 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668169 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.668174 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668179 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.668183 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668187 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668196 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.668199 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668204 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.668208 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668212 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668215 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668218 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668226 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.668231 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668236 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.668239 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668244 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668252 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.668260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668264 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.668267 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668271 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668274 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668284 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.668288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668293 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.668296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668303 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668311 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.668314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668319 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.668323 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.668327 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668330 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668333 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668340 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.668344 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668348 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.668351 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668355 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668363 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.668366 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668370 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.668375 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668379 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668382 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668386 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668390 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668393 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668396 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668405 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.668409 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668433 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.668438 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668442 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668450 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.668454 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668458 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.668461 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668470 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668478 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668482 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668485 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668489 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668492 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668498 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.668503 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.668512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668517 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668530 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.668537 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668542 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.668545 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668549 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668557 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668562 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668576 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668580 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668583 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668589 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.668593 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668598 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.668608 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668614 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668622 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.668629 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668633 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.668637 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668641 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668645 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668648 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668655 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.668662 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668668 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.668672 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668678 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668689 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.668692 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668696 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.668702 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668706 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668719 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668723 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668726 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668730 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668734 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668740 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.668748 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668753 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.668756 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668760 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668767 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.668773 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668777 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.668781 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668792 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.668799 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668803 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668807 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.668810 2968376 command_runner.go:130] >     }
	I1217 10:39:57.668813 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.668816 2968376 command_runner.go:130] > }
	I1217 10:39:57.671107 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.671128 2968376 containerd.go:534] Images already preloaded, skipping extraction
	I1217 10:39:57.671185 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.697059 2968376 command_runner.go:130] > {
	I1217 10:39:57.697078 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.697083 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697093 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.697108 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697114 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.697118 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697122 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697131 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.697142 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697147 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.697155 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697159 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697162 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697166 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697175 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.697180 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697185 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.697188 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697192 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697202 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.697205 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697209 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.697213 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697216 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697219 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697222 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697229 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.697233 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697238 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.697242 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697249 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697256 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.697260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697264 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.697268 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.697272 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697275 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697284 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.697288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697293 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.697296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697300 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697310 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.697314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697318 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.697323 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697327 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697330 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697334 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697338 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697341 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697344 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697350 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.697354 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697359 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.697363 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697366 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697374 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.697377 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697381 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.697384 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697393 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697396 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697400 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697403 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697406 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697409 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697416 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.697419 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697425 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.697428 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697432 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697440 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.697443 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697448 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.697460 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697464 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697467 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697470 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697474 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697477 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697480 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697486 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.697490 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697495 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.697498 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697501 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.697512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697515 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.697519 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697523 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697526 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697530 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697536 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.697540 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697545 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.697548 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697552 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697560 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.697563 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697567 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.697570 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697574 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697578 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697581 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697585 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697588 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697594 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697600 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.697604 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697609 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.697612 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697615 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697622 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.697626 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697630 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.697633 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697637 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.697641 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697645 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697649 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.697652 2968376 command_runner.go:130] >     }
	I1217 10:39:57.697655 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.697657 2968376 command_runner.go:130] > }
	I1217 10:39:57.699989 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.700059 2968376 cache_images.go:86] Images are preloaded, skipping loading
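
Editor's note: the preload check parses the `crictl images --output json` dump above and confirms every expected tag is present before deciding to skip extraction. A sketch of that comparison follows; the struct mirrors the JSON keys in the log rather than minikube's own types.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageList mirrors the shape of `crictl images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // allPreloaded reports whether every wanted tag appears in the dump.
    func allPreloaded(crictlJSON []byte, want []string) (bool, error) {
        var list imageList
        if err := json.Unmarshal(crictlJSON, &list); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, w := range want {
            if !have[w] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, _ := allPreloaded(
            []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10.1"]}]}`),
            []string{"registry.k8s.io/pause:3.10.1"})
        fmt.Println(ok) // true
    }
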
	I1217 10:39:57.700081 2968376 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:39:57.700225 2968376 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
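
Editor's note: the kubelet unit above is rendered from a template with the Kubernetes version, hostname-override, and node-ip substituted in. A text/template sketch of just the ExecStart line (abbreviated and illustrative; not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Flag set trimmed to the values visible in the unit above.
        tmpl := template.Must(template.New("kubelet").Parse(
            "ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet " +
                "--hostname-override={{.Node}} --node-ip={{.IP}}\n"))
        tmpl.Execute(os.Stdout, map[string]string{
            "Version": "v1.35.0-rc.1",
            "Node":    "functional-232588",
            "IP":      "192.168.49.2",
        })
    }
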
	I1217 10:39:57.700311 2968376 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:39:57.722782 2968376 command_runner.go:130] > {
	I1217 10:39:57.722800 2968376 command_runner.go:130] >   "cniconfig": {
	I1217 10:39:57.722805 2968376 command_runner.go:130] >     "Networks": [
	I1217 10:39:57.722813 2968376 command_runner.go:130] >       {
	I1217 10:39:57.722822 2968376 command_runner.go:130] >         "Config": {
	I1217 10:39:57.722827 2968376 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 10:39:57.722835 2968376 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 10:39:57.722839 2968376 command_runner.go:130] >           "Plugins": [
	I1217 10:39:57.722843 2968376 command_runner.go:130] >             {
	I1217 10:39:57.722847 2968376 command_runner.go:130] >               "Network": {
	I1217 10:39:57.722851 2968376 command_runner.go:130] >                 "ipam": {},
	I1217 10:39:57.722856 2968376 command_runner.go:130] >                 "type": "loopback"
	I1217 10:39:57.722860 2968376 command_runner.go:130] >               },
	I1217 10:39:57.722866 2968376 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 10:39:57.722869 2968376 command_runner.go:130] >             }
	I1217 10:39:57.722873 2968376 command_runner.go:130] >           ],
	I1217 10:39:57.722882 2968376 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 10:39:57.722886 2968376 command_runner.go:130] >         },
	I1217 10:39:57.722893 2968376 command_runner.go:130] >         "IFName": "lo"
	I1217 10:39:57.722896 2968376 command_runner.go:130] >       }
	I1217 10:39:57.722899 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722908 2968376 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 10:39:57.722912 2968376 command_runner.go:130] >     "PluginDirs": [
	I1217 10:39:57.722915 2968376 command_runner.go:130] >       "/opt/cni/bin"
	I1217 10:39:57.722919 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722923 2968376 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 10:39:57.722926 2968376 command_runner.go:130] >     "Prefix": "eth"
	I1217 10:39:57.722930 2968376 command_runner.go:130] >   },
	I1217 10:39:57.722933 2968376 command_runner.go:130] >   "config": {
	I1217 10:39:57.722936 2968376 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 10:39:57.722940 2968376 command_runner.go:130] >       "/etc/cdi",
	I1217 10:39:57.722944 2968376 command_runner.go:130] >       "/var/run/cdi"
	I1217 10:39:57.722948 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722952 2968376 command_runner.go:130] >     "cni": {
	I1217 10:39:57.722955 2968376 command_runner.go:130] >       "binDir": "",
	I1217 10:39:57.722959 2968376 command_runner.go:130] >       "binDirs": [
	I1217 10:39:57.722962 2968376 command_runner.go:130] >         "/opt/cni/bin"
	I1217 10:39:57.722965 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.722969 2968376 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 10:39:57.722973 2968376 command_runner.go:130] >       "confTemplate": "",
	I1217 10:39:57.722983 2968376 command_runner.go:130] >       "ipPref": "",
	I1217 10:39:57.722986 2968376 command_runner.go:130] >       "maxConfNum": 1,
	I1217 10:39:57.722991 2968376 command_runner.go:130] >       "setupSerially": false,
	I1217 10:39:57.722995 2968376 command_runner.go:130] >       "useInternalLoopback": false
	I1217 10:39:57.722998 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723004 2968376 command_runner.go:130] >     "containerd": {
	I1217 10:39:57.723008 2968376 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 10:39:57.723013 2968376 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 10:39:57.723017 2968376 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 10:39:57.723021 2968376 command_runner.go:130] >       "runtimes": {
	I1217 10:39:57.723024 2968376 command_runner.go:130] >         "runc": {
	I1217 10:39:57.723029 2968376 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 10:39:57.723033 2968376 command_runner.go:130] >           "PodAnnotations": null,
	I1217 10:39:57.723038 2968376 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 10:39:57.723046 2968376 command_runner.go:130] >           "cgroupWritable": false,
	I1217 10:39:57.723050 2968376 command_runner.go:130] >           "cniConfDir": "",
	I1217 10:39:57.723054 2968376 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 10:39:57.723058 2968376 command_runner.go:130] >           "io_type": "",
	I1217 10:39:57.723061 2968376 command_runner.go:130] >           "options": {
	I1217 10:39:57.723065 2968376 command_runner.go:130] >             "BinaryName": "",
	I1217 10:39:57.723069 2968376 command_runner.go:130] >             "CriuImagePath": "",
	I1217 10:39:57.723074 2968376 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 10:39:57.723077 2968376 command_runner.go:130] >             "IoGid": 0,
	I1217 10:39:57.723081 2968376 command_runner.go:130] >             "IoUid": 0,
	I1217 10:39:57.723085 2968376 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 10:39:57.723089 2968376 command_runner.go:130] >             "Root": "",
	I1217 10:39:57.723092 2968376 command_runner.go:130] >             "ShimCgroup": "",
	I1217 10:39:57.723096 2968376 command_runner.go:130] >             "SystemdCgroup": false
	I1217 10:39:57.723100 2968376 command_runner.go:130] >           },
	I1217 10:39:57.723105 2968376 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 10:39:57.723111 2968376 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 10:39:57.723115 2968376 command_runner.go:130] >           "runtimePath": "",
	I1217 10:39:57.723120 2968376 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 10:39:57.723124 2968376 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 10:39:57.723128 2968376 command_runner.go:130] >           "snapshotter": ""
	I1217 10:39:57.723131 2968376 command_runner.go:130] >         }
	I1217 10:39:57.723134 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723136 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723146 2968376 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 10:39:57.723151 2968376 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 10:39:57.723156 2968376 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 10:39:57.723161 2968376 command_runner.go:130] >     "disableApparmor": false,
	I1217 10:39:57.723166 2968376 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 10:39:57.723170 2968376 command_runner.go:130] >     "disableProcMount": false,
	I1217 10:39:57.723174 2968376 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 10:39:57.723177 2968376 command_runner.go:130] >     "enableCDI": true,
	I1217 10:39:57.723181 2968376 command_runner.go:130] >     "enableSelinux": false,
	I1217 10:39:57.723188 2968376 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 10:39:57.723195 2968376 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 10:39:57.723200 2968376 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 10:39:57.723204 2968376 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 10:39:57.723208 2968376 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 10:39:57.723212 2968376 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 10:39:57.723216 2968376 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 10:39:57.723222 2968376 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723226 2968376 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 10:39:57.723231 2968376 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723236 2968376 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 10:39:57.723241 2968376 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 10:39:57.723243 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723247 2968376 command_runner.go:130] >   "features": {
	I1217 10:39:57.723251 2968376 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 10:39:57.723254 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723257 2968376 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 10:39:57.723267 2968376 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723277 2968376 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723281 2968376 command_runner.go:130] >   "runtimeHandlers": [
	I1217 10:39:57.723283 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723287 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723291 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723297 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723299 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723302 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723305 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723308 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723315 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723319 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723322 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723326 2968376 command_runner.go:130] >       "name": "runc"
	I1217 10:39:57.723328 2968376 command_runner.go:130] >     }
	I1217 10:39:57.723335 2968376 command_runner.go:130] >   ],
	I1217 10:39:57.723338 2968376 command_runner.go:130] >   "status": {
	I1217 10:39:57.723342 2968376 command_runner.go:130] >     "conditions": [
	I1217 10:39:57.723345 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723348 2968376 command_runner.go:130] >         "message": "",
	I1217 10:39:57.723352 2968376 command_runner.go:130] >         "reason": "",
	I1217 10:39:57.723356 2968376 command_runner.go:130] >         "status": true,
	I1217 10:39:57.723361 2968376 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 10:39:57.723364 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723367 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723373 2968376 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 10:39:57.723378 2968376 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 10:39:57.723382 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723386 2968376 command_runner.go:130] >         "type": "NetworkReady"
	I1217 10:39:57.723389 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723391 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723414 2968376 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 10:39:57.723421 2968376 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 10:39:57.723426 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723432 2968376 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 10:39:57.723434 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723437 2968376 command_runner.go:130] >     ]
	I1217 10:39:57.723440 2968376 command_runner.go:130] >   }
	I1217 10:39:57.723442 2968376 command_runner.go:130] > }
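
Editor's note: `crictl info` above reports NetworkReady=false ("cni plugin not initialized"), which is expected at this stage; the kindnet recommendation on the next line is what eventually deploys a CNI and clears it. A sketch of extracting those readiness conditions, with a struct that mirrors the JSON keys in the log (not a published API):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // runtimeStatus mirrors the status.conditions shape of `crictl info`.
    type runtimeStatus struct {
        Status struct {
            Conditions []struct {
                Type    string `json:"type"`
                Status  bool   `json:"status"`
                Reason  string `json:"reason"`
                Message string `json:"message"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        raw := []byte(`{"status":{"conditions":[
          {"type":"RuntimeReady","status":true,"reason":"","message":""},
          {"type":"NetworkReady","status":false,"reason":"NetworkPluginNotReady",
           "message":"Network plugin returns error: cni plugin not initialized"}]}}`)
        var rs runtimeStatus
        if err := json.Unmarshal(raw, &rs); err != nil {
            panic(err)
        }
        for _, c := range rs.Status.Conditions {
            fmt.Printf("%s=%v %s\n", c.Type, c.Status, c.Reason)
        }
    }
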
	I1217 10:39:57.726093 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:57.726119 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:57.726139 2968376 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:39:57.726166 2968376 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:39:57.726283 2968376 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
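minikube renders the kubeadm.yaml above from the options struct logged at kubeadm.go:190, then (as seen a few lines below) writes it to /var/tmp/minikube/kubeadm.yaml.new and diffs it against the existing file. A toy Go text/template fragment showing the idea; the template text and field set here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // opts carries only the fields this fragment needs; the real options
    // struct (kubeadm.go:190) has many more.
    type opts struct {
        AdvertiseAddress string
        APIServerPort    int
        NodeName         string
        CRISocket        string
    }

    const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(fragment))
        // Values taken from the log above.
        _ = t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.49.2",
            APIServerPort:    8441,
            NodeName:         "functional-232588",
            CRISocket:        "/run/containerd/containerd.sock",
        })
    }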
	
	I1217 10:39:57.726359 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:39:57.733320 2968376 command_runner.go:130] > kubeadm
	I1217 10:39:57.733342 2968376 command_runner.go:130] > kubectl
	I1217 10:39:57.733347 2968376 command_runner.go:130] > kubelet
	I1217 10:39:57.734253 2968376 binaries.go:51] Found k8s binaries, skipping transfer
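The `sudo ls` above is the whole binary check: if kubeadm, kubectl, and kubelet are all present under /var/lib/minikube/binaries/<version>, the transfer is skipped (binaries.go:51). A sketch of the same decision done locally with os.Stat; paths are from the log and error handling is simplified:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/var/lib/minikube/binaries/v1.35.0-rc.1" // path from the log
        needed := []string{"kubeadm", "kubectl", "kubelet"}

        missing := false
        for _, name := range needed {
            if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
                fmt.Printf("missing %s: %v\n", name, err)
                missing = true
            }
        }
        if !missing {
            fmt.Println("found k8s binaries, skipping transfer") // mirrors binaries.go:51
        }
    }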
	I1217 10:39:57.734351 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:39:57.741900 2968376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:39:57.754718 2968376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:39:57.767131 2968376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 10:39:57.780328 2968376 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:39:57.783968 2968376 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 10:39:57.784263 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.891500 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:39:58.252332 2968376 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:39:58.252409 2968376 certs.go:195] generating shared ca certs ...
	I1217 10:39:58.252461 2968376 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.252670 2968376 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:39:58.252752 2968376 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:39:58.252788 2968376 certs.go:257] generating profile certs ...
	I1217 10:39:58.252943 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:39:58.253053 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:39:58.253133 2968376 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:39:58.253172 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 10:39:58.253214 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 10:39:58.253260 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 10:39:58.253294 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 10:39:58.253341 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 10:39:58.253377 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 10:39:58.253421 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 10:39:58.253456 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 10:39:58.253577 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:39:58.253658 2968376 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:39:58.253688 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:39:58.253756 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:39:58.253819 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:39:58.253883 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:39:58.253975 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:58.254044 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.254093 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem -> /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.254126 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.254782 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:39:58.276977 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:39:58.300224 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:39:58.319429 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:39:58.338203 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:39:58.355898 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:39:58.373473 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:39:58.391528 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:39:58.408858 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:39:58.426819 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:39:58.444926 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:39:58.462979 2968376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 10:39:58.476114 2968376 ssh_runner.go:195] Run: openssl version
	I1217 10:39:58.483093 2968376 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 10:39:58.483240 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.490661 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:39:58.498193 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502204 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502289 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502352 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.543361 2968376 command_runner.go:130] > b5213941
	I1217 10:39:58.543894 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 10:39:58.551548 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.559110 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:39:58.567064 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.570982 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571071 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571149 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.611772 2968376 command_runner.go:130] > 51391683
	I1217 10:39:58.612217 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:39:58.619901 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.627496 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:39:58.635170 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639161 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639286 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639343 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.679963 2968376 command_runner.go:130] > 3ec20f2e
	I1217 10:39:58.680491 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
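The repeated openssl/ln sequence above installs each CA under the standard OpenSSL hashed-directory convention: /etc/ssl/certs/<subject-hash>.0 must be a symlink to the PEM file. Since the subject hash is easiest to obtain from openssl itself, here is a sketch that shells out the same way the log does (needs root for the symlink, like the log's sudo; paths are from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA symlinks /etc/ssl/certs/<hash>.0 to pem, the convention the
    // openssl/ln commands in the log implement.
    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // mirrors ln -fs: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }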
	I1217 10:39:58.687873 2968376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691452 2968376 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691483 2968376 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 10:39:58.691491 2968376 command_runner.go:130] > Device: 259,1	Inode: 3648630     Links: 1
	I1217 10:39:58.691498 2968376 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:58.691503 2968376 command_runner.go:130] > Access: 2025-12-17 10:35:51.067485305 +0000
	I1217 10:39:58.691508 2968376 command_runner.go:130] > Modify: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691513 2968376 command_runner.go:130] > Change: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691519 2968376 command_runner.go:130] >  Birth: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691792 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 10:39:58.732576 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.733078 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 10:39:58.773416 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.773947 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 10:39:58.814511 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.815058 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 10:39:58.855809 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.856437 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 10:39:58.897493 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.897637 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 10:39:58.937941 2968376 command_runner.go:130] > Certificate will not expire
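`openssl x509 -checkend 86400` fails when the certificate expires within the next 24 hours, which is how minikube decides whether a cert needs regenerating. The equivalent check with Go's crypto/x509; the path is from the log, and this is a sketch, not minikube's code:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, i.e. the Go equivalent of openssl's -checkend.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire") // matches the log output above
        }
    }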
	I1217 10:39:58.938362 2968376 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:58.938478 2968376 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:39:58.938558 2968376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:39:58.967095 2968376 cri.go:89] found id: ""
	I1217 10:39:58.967172 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:39:58.974207 2968376 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 10:39:58.974232 2968376 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 10:39:58.974239 2968376 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 10:39:58.975124 2968376 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 10:39:58.975142 2968376 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 10:39:58.975194 2968376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 10:39:58.982722 2968376 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:39:58.983159 2968376 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232588" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.983280 2968376 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232588" cluster setting kubeconfig missing "functional-232588" context setting]
	I1217 10:39:58.983551 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.984002 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.984156 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 10:39:58.984706 2968376 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 10:39:58.984730 2968376 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 10:39:58.984737 2968376 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 10:39:58.984745 2968376 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 10:39:58.984756 2968376 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 10:39:58.984794 2968376 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
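The rest.Config dumped above (client cert/key plus the minikube CA) is what any client-go consumer would derive from the repaired kubeconfig. A minimal sketch, assuming k8s.io/client-go and k8s.io/apimachinery are available; the kubeconfig path and node name are the ones from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := "/home/jenkins/minikube-integration/22182-2922712/kubeconfig" // path from the log
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := clientset.CoreV1().Nodes().Get(context.Background(), "functional-232588", metav1.GetOptions{})
        if err != nil {
            fmt.Println("get node:", err) // connection refused while the apiserver restarts
            return
        }
        fmt.Println("node:", node.Name)
    }

While the apiserver is still coming back, this Get fails with the same connection-refused error that the node_ready poll below keeps retrying.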
	I1217 10:39:58.985054 2968376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 10:39:58.992764 2968376 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 10:39:58.992810 2968376 kubeadm.go:602] duration metric: took 17.660629ms to restartPrimaryControlPlane
	I1217 10:39:58.992820 2968376 kubeadm.go:403] duration metric: took 54.467316ms to StartCluster
	I1217 10:39:58.992834 2968376 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.992909 2968376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.993526 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.993746 2968376 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 10:39:58.994170 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:58.994219 2968376 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 10:39:58.994288 2968376 addons.go:70] Setting storage-provisioner=true in profile "functional-232588"
	I1217 10:39:58.994301 2968376 addons.go:239] Setting addon storage-provisioner=true in "functional-232588"
	I1217 10:39:58.994329 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:58.994354 2968376 addons.go:70] Setting default-storageclass=true in profile "functional-232588"
	I1217 10:39:58.994416 2968376 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232588"
	I1217 10:39:58.994775 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:58.994809 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.000060 2968376 out.go:179] * Verifying Kubernetes components...
	I1217 10:39:59.002988 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:59.030107 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:59.030278 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 10:39:59.030548 2968376 addons.go:239] Setting addon default-storageclass=true in "functional-232588"
	I1217 10:39:59.030583 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:59.030999 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.046619 2968376 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 10:39:59.049547 2968376 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.049578 2968376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 10:39:59.049652 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.071122 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:59.078111 2968376 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 10:39:59.078138 2968376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 10:39:59.078204 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.106268 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:59.210035 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:39:59.247804 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.250104 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.029975 2968376 node_ready.go:35] waiting up to 6m0s for node "functional-232588" to be "Ready" ...
	I1217 10:40:00.030121 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.030183 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:00.030443 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030485 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030522 2968376 retry.go:31] will retry after 293.620925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030561 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030575 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030582 2968376 retry.go:31] will retry after 156.365506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
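Every failed apply above is handed to retry.go:31, which reschedules it after a randomized, growing delay (293ms, 156ms, 279ms, ... in the lines above). A minimal sketch of that backoff-with-jitter pattern; the growth and jitter factors here are illustrative, and minikube's retry package differs in detail:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs f up to attempts times, sleeping a jittered, growing delay
    // between failures, in the spirit of the "will retry after ..." log lines.
    func retry(attempts int, base time.Duration, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("connect: connection refused")
            }
            return nil
        })
        fmt.Println("done:", err)
    }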
	I1217 10:40:00.030650 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:00.188354 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.324847 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.436532 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.436662 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.436836 2968376 retry.go:31] will retry after 279.814099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.516954 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.518501 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.518555 2968376 retry.go:31] will retry after 262.10287ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.531577 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.531724 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:00.533353 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 10:40:00.717812 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.781511 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.801403 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.801643 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.801671 2968376 retry.go:31] will retry after 799.844048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868602 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.868642 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868698 2968376 retry.go:31] will retry after 554.70169ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.031171 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.031268 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.031636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.424206 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:01.486829 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.486884 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.486903 2968376 retry.go:31] will retry after 534.910165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.531036 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.531190 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.531514 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.601938 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:01.666361 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.666415 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.666435 2968376 retry.go:31] will retry after 494.63938ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.022963 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:02.030812 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.030945 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.031372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:02.031439 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
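node_ready.go polls GET /api/v1/nodes/functional-232588 roughly every 500ms for up to 6 minutes, treating connection refused as retryable rather than fatal. Sketched with client-go's polling helper (wait.PollUntilContextTimeout from recent k8s.io/apimachinery; clientset construction as in the earlier sketch):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22182-2922712/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms for up to 6 minutes, matching the cadence in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "functional-232588", metav1.GetOptions{})
                if err != nil {
                    return false, nil // swallow connection refused and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("wait result:", err)
    }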
	I1217 10:40:02.093352 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.093469 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.093495 2968376 retry.go:31] will retry after 1.147395482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.161756 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:02.224785 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.224835 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.224873 2968376 retry.go:31] will retry after 722.380129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.530243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.530335 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.530682 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:02.948277 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:03.019220 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.023774 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.023820 2968376 retry.go:31] will retry after 1.527910453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.031105 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.031182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.031525 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:03.241898 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:03.304153 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.304205 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.304227 2968376 retry.go:31] will retry after 2.808262652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.530353 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.530425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.530767 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.030262 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.030340 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.030662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.530190 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.530267 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.530613 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:04.530682 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:04.552783 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:04.614277 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:04.618634 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:04.618671 2968376 retry.go:31] will retry after 1.686088172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:05.031243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.031319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.031611 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:05.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.530314 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.530636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.030216 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.030295 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.030584 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.113005 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:06.174987 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.175028 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.175048 2968376 retry.go:31] will retry after 2.620064864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.305352 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:06.366722 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.366771 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.366790 2968376 retry.go:31] will retry after 6.20410258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.531098 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.531170 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:06.531566 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
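
Editor's note: each `node_ready.go:55` warning is one failed iteration of a loop that GETs the node and inspects its Ready condition. A sketch of that check with client-go, assuming a 500ms poll interval to match the log's cadence (`waitNodeReady` is a hypothetical helper, not minikube's function):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the apiserver until the named node reports Ready=True.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                // While the apiserver is down every GET fails with
                // "connect: connection refused", as in the log; keep retrying.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        fmt.Println(waitNodeReady(ctx, cs, "functional-232588"))
    }
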
	I1217 10:40:07.030285 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.030361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.030703 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:07.530195 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.530269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.530540 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.030245 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.030326 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.530335 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.530413 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.530732 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.796304 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:08.853426 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:08.857034 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:08.857067 2968376 retry.go:31] will retry after 3.174722269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:09.030586 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:09.030666 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:09.031008 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:09.031064 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:09.530804 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:09.530879 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:09.531204 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:10.031140 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:10.031218 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:10.031521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:10.530229 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:10.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:10.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:11.030272 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:11.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:11.030674 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:11.530355 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:11.530450 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:11.530745 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:11.530788 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:12.030183 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:12.030259 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:12.030568 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:12.032754 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:12.104534 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.104594 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.104617 2968376 retry.go:31] will retry after 7.427014064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.531116 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:12.531194 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:12.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:12.571824 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:12.627783 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.631439 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.631473 2968376 retry.go:31] will retry after 5.673499761s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:13.031007 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:13.031079 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:13.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:13.530133 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:13.530207 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:13.530473 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:14.030881 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:14.030963 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:14.031294 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:14.031348 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:14.531063 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:14.531139 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:14.531511 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
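
Editor's note: the paired "Request"/"Response" lines come from client-go's debug round tripper, which wraps the HTTP transport at high log verbosity. A toy equivalent as a plain http.RoundTripper (`loggingTransport` is illustrative, not client-go's type) shows why a refused connection logs `status="" milliseconds=0`: the dial fails before any response exists.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // loggingTransport wraps another RoundTripper and logs each request's
    // verb, URL, and latency, like the round_trippers output above.
    type loggingTransport struct {
        next http.RoundTripper
    }

    func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
        resp, err := t.next.RoundTrip(req)
        ms := time.Since(start).Milliseconds()
        if err != nil {
            // No response ever arrives on a refused connection, so the
            // status stays empty and the elapsed time rounds to ~0ms.
            fmt.Printf("Response status=%q milliseconds=%d err=%v\n", "", ms, err)
            return nil, err
        }
        fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
        return resp, nil
    }

    func main() {
        client := &http.Client{Transport: &loggingTransport{next: http.DefaultTransport}}
        _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-232588")
        fmt.Println(err)
    }
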
	I1217 10:40:15.030415 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:15.030505 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:15.030865 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:15.530246 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:15.530327 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:15.530615 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:16.030257 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:16.030335 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:16.030683 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:16.530343 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:16.530412 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:16.530735 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:16.530792 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:17.030348 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:17.030438 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:17.030746 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:17.530427 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:17.530508 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:17.530854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:18.031138 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:18.031239 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:18.031524 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:18.306153 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:18.363523 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:18.367149 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:18.367184 2968376 retry.go:31] will retry after 11.676089788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:18.530483 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:18.530628 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:18.530998 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:18.531054 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:19.031060 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:19.031138 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:19.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:19.530144 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:19.530217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:19.530501 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:19.532780 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:19.596086 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:19.596134 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:19.596153 2968376 retry.go:31] will retry after 6.09625298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:20.031102 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:20.031251 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:20.031747 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:20.530743 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:20.530896 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:20.531474 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:20.531549 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:21.030954 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:21.031034 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:21.031324 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:21.531097 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:21.531170 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:21.531522 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:22.030145 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:22.030232 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:22.030617 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:22.530952 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:22.531023 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:22.531286 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:23.031049 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:23.031121 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:23.031488 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:23.031552 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
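
Editor's note: "connection refused" means the TCP handshake itself is rejected, i.e. nothing is listening on port 8441 yet; the apiserver process has not come back. A quick way to distinguish that from a hung-but-listening apiserver is a raw dial with a short timeout (host and port below are just the values from this run):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // ECONNREFUSED comes back almost instantly; a timeout here would
        // instead suggest a firewall drop or a wedged listener.
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not accepting connections:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting TCP connections")
    }
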
	I1217 10:40:23.531151 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:23.531233 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:23.531594 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:24.030205 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:24.030271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:24.030618 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:24.530297 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:24.530374 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:24.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:25.030556 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:25.030634 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:25.031013 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:25.530898 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:25.530990 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:25.531308 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:25.531351 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:25.692701 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:25.761074 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:25.761116 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:25.761134 2968376 retry.go:31] will retry after 8.308022173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:26.030656 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:26.030736 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:26.031050 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:26.530816 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:26.530887 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:26.531227 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:27.030620 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:27.030689 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:27.030975 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:27.530810 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:27.530882 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:27.531225 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:28.031037 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:28.031121 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:28.031512 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:28.031588 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:28.530251 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:28.530319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:28.530583 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:29.030525 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:29.030614 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:29.031053 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:29.530775 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:29.530846 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:29.531189 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:30.032544 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:30.032629 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:30.032970 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:30.033031 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:30.044190 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:30.141158 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:30.141207 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:30.141228 2968376 retry.go:31] will retry after 21.251088353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:30.530770 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:30.530848 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:30.531184 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:31.031023 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:31.031097 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:31.031429 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:31.530162 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:31.530338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:31.530687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:32.030318 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:32.030410 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:32.030863 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:32.530571 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:32.530648 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:32.531098 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:32.531174 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:33.030920 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:33.031010 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:33.031359 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:33.531147 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:33.531219 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:33.531570 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:34.030334 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:34.030418 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:34.030775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:34.070045 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:34.128651 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:34.132259 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:34.132293 2968376 retry.go:31] will retry after 23.004999937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
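
Editor's note: each addon apply above shells out to the kubectl binary bundled with the target Kubernetes version rather than using client-go directly, which is why the failure surfaces as a process exit status plus captured stdout/stderr. A rough sketch of that invocation (`applyManifest` is a hypothetical helper; paths are copied from the log, and running it assumes the minikube node environment where `sudo` accepts the inline KUBECONFIG assignment):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyManifest runs the bundled kubectl against one addon manifest,
    // the same shape of command shown in the log lines above.
    func applyManifest(kubectl, kubeconfig, manifest string) error {
        cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig,
            kubectl, "apply", "--force", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            // Mirrors addons.go:477: report the exit error together with the
            // combined output so the retry log carries kubectl's stderr.
            return fmt.Errorf("apply %s failed: %w\n%s", manifest, err, out)
        }
        return nil
    }

    func main() {
        err := applyManifest(
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
        )
        fmt.Println(err)
    }
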
	I1217 10:40:34.530392 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:34.530466 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:34.530735 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:35.030855 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:35.030938 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:35.031252 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:35.031308 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:35.530763 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:35.530834 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:35.531181 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:36.030980 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:36.031106 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:36.031458 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:36.530826 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:36.530905 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:36.531257 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:37.031180 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:37.031261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:37.031662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:37.031754 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:37.530206 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:37.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:37.530649 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:38.030423 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:38.030503 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:38.030854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:38.530587 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:38.530659 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:38.531005 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:39.030844 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:39.030924 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:39.031203 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:39.531010 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:39.531096 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:39.531446 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:39.531521 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:40.031145 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:40.031231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:40.031658 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:40.530343 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:40.530420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:40.530707 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:41.030992 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:41.031064 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:41.031409 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:41.531176 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:41.531252 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:41.531592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:41.531649 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:42.030335 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:42.030418 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:42.030713 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:42.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:42.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:42.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:43.030224 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:43.030309 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:43.030694 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:43.530392 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:43.530468 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:43.530795 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:44.030259 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:44.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:44.030666 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:44.030720 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:44.530388 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:44.530467 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:44.530803 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:45.030897 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:45.032736 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:45.034090 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 10:40:45.530857 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:45.530936 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:45.531262 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:46.031009 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:46.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:46.031343 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:46.031380 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:46.531073 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:46.531152 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:46.531521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:47.030170 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:47.030255 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:47.030602 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:47.530303 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:47.530374 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:47.530644 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:48.030323 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:48.030406 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:48.030744 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:48.530500 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:48.530605 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:48.530966 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:48.531023 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:49.030795 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:49.030871 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:49.031172 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:49.530860 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:49.530935 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:49.531267 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:50.031129 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:50.031208 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:50.031548 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:50.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:50.530303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:50.530574 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:51.030327 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:51.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:51.030749 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:51.030806 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:51.393321 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:51.454332 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:51.458316 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:51.458350 2968376 retry.go:31] will retry after 15.302727777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
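	The retry.go entry above shows the shape of the addon retries: a growing, jittered delay between kubectl apply attempts (15.3 s here, then 20.2 s and 35.0 s further down). A sketch of the same pattern built on apimachinery's stock backoff helper rather than minikube's internal retry package; the backoff parameters are assumptions for illustration:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"

	        "k8s.io/apimachinery/pkg/util/wait"
	    )

	    // applyWithBackoff re-runs `kubectl apply` until it succeeds, sleeping a
	    // jittered, doubling interval between attempts.
	    func applyWithBackoff(manifest string) error {
	        backoff := wait.Backoff{
	            Duration: 5 * time.Second, // first delay
	            Factor:   2.0,             // double the delay each attempt
	            Jitter:   0.5,             // randomize each delay by up to +50%
	            Steps:    6,               // give up after six attempts
	        }
	        return wait.ExponentialBackoff(backoff, func() (bool, error) {
	            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	            if err != nil {
	                fmt.Printf("apply failed, will retry: %s\n", out)
	                return false, nil // apiserver still down: retryable
	            }
	            return true, nil
	        })
	    }

	    func main() {
	        if err := applyWithBackoff("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
	            fmt.Println("gave up:", err)
	        }
	    }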
	I1217 10:40:51.530571 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:51.530643 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:51.530966 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:52.030247 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:52.030332 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:52.030623 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:52.530219 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:52.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:52.530691 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:53.030289 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:53.030364 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:53.030698 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:53.530380 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:53.530457 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:53.530780 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:53.530833 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:54.030549 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:54.030652 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:54.030947 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:54.530639 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:54.530716 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:54.531043 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:55.030934 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:55.031013 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:55.031455 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:55.531099 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:55.531193 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:55.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:55.531578 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:56.030320 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.030398 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.030700 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.530273 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.030303 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.030719 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.138000 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:57.193212 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:57.197444 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:57.197478 2968376 retry.go:31] will retry after 20.170499035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
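	The validation error itself is secondary: kubectl downloads the OpenAPI schema before applying, and that download hits the same refused port, so --validate=false would only mask the real problem. What the retries are actually waiting for is the apiserver to answer again, which can be probed directly against its /readyz endpoint (a hypothetical standalone check; skipping TLS verification is acceptable only for a local one-off probe like this):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // apiserverReady reports whether the apiserver answers /readyz with 200.
	    // TLS verification is skipped because this is a local diagnostic probe.
	    func apiserverReady(base string) bool {
	        client := &http.Client{
	            Timeout:   2 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        resp, err := client.Get(base + "/readyz")
	        if err != nil {
	            return false // e.g. connection refused, as in the log above
	        }
	        defer resp.Body.Close()
	        return resp.StatusCode == http.StatusOK
	    }

	    func main() {
	        fmt.Println(apiserverReady("https://192.168.49.2:8441"))
	    }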
	I1217 10:40:57.530886 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.530963 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.531316 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:58.031030 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:58.031101 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:58.031459 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:58.031521 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:58.530185 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:58.530279 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:58.530591 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:59.030603 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:59.030673 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:59.031011 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:59.530181 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:59.530254 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:59.530556 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:00.031130 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:00.031217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:00.031532 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:00.031582 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:00.530232 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:00.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:00.530652 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:01.030118 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:01.030193 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:01.030459 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:01.530122 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:01.530201 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:01.530574 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:02.030319 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:02.030407 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:02.030755 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:02.530195 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:02.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:02.530558 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:02.530607 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:03.030232 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:03.030305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:03.030654 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:03.530352 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:03.530433 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:03.530775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:04.030460 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:04.030552 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:04.030847 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:04.530572 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:04.530659 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:04.530971 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:04.531027 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:05.031015 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:05.031091 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:05.031381 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:05.531137 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:05.531210 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:05.531480 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.030204 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:06.030287 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:06.030672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.530230 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:06.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:06.530661 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.762229 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:06.820073 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:06.820109 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:06.820128 2968376 retry.go:31] will retry after 35.040877283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:07.030604 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:07.030693 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:07.030967 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:07.031017 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:07.530709 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:07.530791 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:07.531216 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:08.030859 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:08.030956 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:08.031280 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:08.531028 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:08.531110 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:08.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:09.030137 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:09.030210 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:09.030518 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:09.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:09.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:09.530639 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:09.530700 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:10.030448 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:10.030530 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:10.030820 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:10.530483 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:10.530577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:10.530870 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:11.030249 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:11.030346 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:11.030673 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:11.530364 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:11.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:11.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:11.530760 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:12.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:12.030329 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:12.030660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:12.530205 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:12.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:12.530608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:13.030312 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:13.030400 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:13.030672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:13.530341 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:13.530415 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:13.530818 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:13.530880 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:14.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:14.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:14.030678 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:14.530384 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:14.530453 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:14.530771 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:15.030787 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:15.030877 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:15.031291 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:15.531114 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:15.531196 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:15.531528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:15.531590 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:16.030220 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:16.030296 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:16.030573 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:16.530300 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:16.530383 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:16.530739 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:17.030458 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:17.030539 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:17.030882 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:17.368346 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:17.428304 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:17.431873 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:17.431904 2968376 retry.go:31] will retry after 38.363968078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:17.531154 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:17.531231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:17.531502 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:18.030234 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:18.030352 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:18.030774 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:18.030859 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:18.530515 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:18.530607 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:18.530942 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:19.030903 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:19.030980 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:19.031301 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:19.530780 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:19.530855 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:19.531233 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:20.031004 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:20.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:20.031456 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:20.031515 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:20.530158 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:20.530242 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:20.530554 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:21.030247 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:21.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:21.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:21.530378 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:21.530474 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:21.530782 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:22.030443 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:22.030541 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:22.030864 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:22.530263 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:22.530337 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:22.530672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:22.530725 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:23.030389 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:23.030466 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:23.030819 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:23.530513 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:23.530591 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:23.530877 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:24.030399 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:24.030471 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:24.030823 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:24.530238 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:24.530307 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:24.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:25.030797 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:25.030872 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:25.031158 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:25.031215 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:25.530943 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:25.531023 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:25.531343 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:26.031163 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:26.031243 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:26.031563 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:26.530221 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:26.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:26.530613 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:27.030270 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:27.030340 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:27.030646 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:27.530262 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:27.530344 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:27.530672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:27.530735 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:28.030374 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:28.030450 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:28.030789 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:28.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:28.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:28.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:29.030406 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:29.030498 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:29.030839 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:29.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:29.530270 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:29.530587 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:30.031168 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll against /api/v1/nodes/functional-232588 repeats every ~500ms through 10:41:41.530, each attempt logging an empty response; the same "will retry ... connect: connection refused" warning recurs at 10:41:32.530, 10:41:34.530, 10:41:37.030, 10:41:39.031, and 10:41:41.530 ...]
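The run above is minikube's node-Ready wait loop: one GET roughly every 500ms, retrying on any transport error until a deadline. A minimal, self-contained sketch of that polling shape follows; the endpoint is taken from the log, while the helper name, the 6-minute budget, and the dial-only readiness check are illustrative assumptions, not minikube's actual node_ready.go.

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// checkNodeReady stands in for the GET /api/v1/nodes/<name> call seen in the
// log. It only dials the apiserver endpoint, so a dead listener surfaces the
// same "connect: connection refused" error recorded above.
func checkNodeReady(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const addr = "192.168.49.2:8441" // apiserver endpoint from the log
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if err := checkNodeReady(addr); err != nil {
			// Mirrors the W-level "will retry" lines: log and poll again.
			fmt.Printf("error getting node (will retry): %v\n", err)
			time.Sleep(500 * time.Millisecond) // ~500ms cadence, as in the log
			continue
		}
		fmt.Println("node reachable")
		return
	}
	fmt.Println(errors.New("timed out waiting for node to become Ready"))
}

Dialing is a sufficient stand-in here because every failure in this window is connection refused, i.e. nothing is listening on 192.168.49.2:8441 at all.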
	I1217 10:41:41.862178 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:41.923706 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923759 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923872 2968376 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
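The storageclass apply above fails before any object reaches the cluster: kubectl's client-side validation first downloads the OpenAPI schema from https://localhost:8441/openapi/v2, and that connection is refused. Passing --validate=false would only skip the schema fetch; the apply itself would still fail against the same unreachable apiserver. A quick way to confirm that distinction is to probe the failing URL directly. A minimal sketch, with the URL taken from the error text and the client setup illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Probe the same URL kubectl validation fetches (per the error above).
// InsecureSkipVerify is acceptable here because we only care whether the
// apiserver answers at all, not whether its serving certificate verifies.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		// "connection refused" here means no process is listening on 8441,
		// so the manifest never reached the cluster at all.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	// Any HTTP status, even 401/403, proves the listener is up and the
	// failure would lie elsewhere (auth, manifest contents, etc.).
	fmt.Println("apiserver answered:", resp.Status)
}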
	[... the Ready poll continues unchanged every ~500ms from 10:41:42.031 through 10:41:55.531, still with empty responses and recurring "will retry ... connect: connection refused" warnings (10:41:43.530, 10:41:45.531, 10:41:48.031, 10:41:50.531, 10:41:53.031, 10:41:55.531) ...]
	I1217 10:41:55.796501 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:55.858122 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858175 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858259 2968376 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 10:41:55.863014 2968376 out.go:179] * Enabled addons: 
	I1217 10:41:55.865747 2968376 addons.go:530] duration metric: took 1m56.871522842s for enable addons: enabled=[]
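The two summary lines above show the addons phase giving up: after bounded retries for both manifests, minikube reports an empty enabled set plus a duration metric for the whole phase. A minimal sketch of that bounded-retry shape; the retry budget, interval, manifest paths, and message formats here are illustrative stand-ins, not minikube's actual addons.go.

package main

import (
	"fmt"
	"time"
)

// applyManifest stands in for the `kubectl apply --force -f <addon>.yaml`
// invocation from the log; it always fails here to mimic the refused
// connection during this test window.
func applyManifest(path string) error {
	return fmt.Errorf("apply %s: dial tcp [::1]:8441: connect: connection refused", path)
}

func main() {
	start := time.Now()
	var enabled []string
	deadline := time.Now().Add(30 * time.Second) // illustrative retry budget
	for _, addon := range []string{"storageclass", "storage-provisioner"} {
		for {
			err := applyManifest("/etc/kubernetes/addons/" + addon + ".yaml")
			if err == nil {
				enabled = append(enabled, addon)
				break
			}
			if time.Now().After(deadline) {
				fmt.Printf("! Enabling '%s' returned an error: %v\n", addon, err)
				break
			}
			fmt.Printf("apply failed, will retry: %v\n", err)
			time.Sleep(2 * time.Second)
		}
	}
	// Matches the shape of the summary lines above: an empty enabled list
	// plus a duration metric for the whole enable-addons phase.
	fmt.Printf("* Enabled addons: %v\n", enabled)
	fmt.Printf("duration metric: took %s for enable addons\n", time.Since(start))
}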
	I1217 10:41:56.030483 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:56.030561 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:56.030907 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:56.530592 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:56.530668 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:56.530973 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:57.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:57.030336 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:57.030717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:57.530234 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:57.530308 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:57.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:58.033611 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:58.033711 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:58.033996 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:58.034053 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:58.530276 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:58.530381 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:58.530759 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:59.030773 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:59.030845 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:59.031207 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:59.531008 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:59.531115 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:59.531404 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:00.030325 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:00.030471 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:00.030856 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:00.530785 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:00.530901 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:00.531226 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:00.531288 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:01.030976 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:01.031043 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:01.031299 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:01.531109 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:01.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:01.531522 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:02.030231 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:02.030334 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:02.030695 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:02.530421 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:02.530490 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:02.530829 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:03.030527 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:03.030623 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:03.030985 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:03.031044 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:03.530805 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:03.530890 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:03.531241 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:04.030644 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:04.030719 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:04.031014 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:04.530743 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:04.530821 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:04.531126 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:05.030982 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:05.031061 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:05.031449 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:05.031509 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:05.530161 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:05.530231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:05.530503 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:06.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:06.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:06.030811 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:06.530502 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:06.530577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:06.530933 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:07.030646 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:07.030722 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:07.031021 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:07.530377 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:07.530455 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:07.530792 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:07.530847 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:08.030509 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:08.030589 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:08.030943 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:08.530625 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:08.530698 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:08.530961 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:09.030865 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:09.030938 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:09.031271 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:09.531064 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:09.531145 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:09.531546 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:09.531604 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:10.030184 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:10.030265 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:10.030604 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:10.530301 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:10.530388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:10.530737 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:11.030319 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:11.030395 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:11.030731 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:11.530208 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:11.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:11.530559 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:12.030254 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:12.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:12.030671 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:12.030728 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:12.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:12.530298 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:12.530650 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:13.030199 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:13.030289 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:13.030609 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:13.530174 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:13.530251 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:13.530593 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:14.030325 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:14.030403 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:14.030742 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:14.030815 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:14.530197 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:14.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:14.530595 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:15.030677 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:15.030769 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:15.031176 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:15.530953 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:15.531037 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:15.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:16.030632 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:16.030705 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:16.031041 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:16.031095 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:16.530824 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:16.530899 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:16.531227 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:17.031078 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:17.031158 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:17.031507 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:17.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:17.530279 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:17.530603 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:18.030263 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:18.030383 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:18.030909 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:18.530209 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:18.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:18.530632 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:18.530733 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:19.030297 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:19.030368 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:19.030628 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:19.530369 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:19.530456 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:19.530831 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:20.030743 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:20.030860 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:20.031293 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:20.531060 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:20.531143 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:20.531416 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:20.531464 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:21.030175 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:21.030250 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:21.030599 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:21.530297 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:21.530372 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:21.530710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:22.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:22.030288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:22.030595 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:23.030756 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[the identical GET poll repeats at ~500ms intervals from 10:42:22 through 10:43:23, every response empty with status=""; node_ready.go:55 logs the same "connection refused" warning roughly every fifth attempt, the last at 10:43:22.030612]
	I1217 10:43:24.030221 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:24.030298 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:24.030639 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:24.030700 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:24.530199 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:24.530274 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:24.530592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:25.030528 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:25.030601 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:25.030897 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:25.530568 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:25.530644 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:25.531019 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:26.030843 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:26.030932 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:26.031265 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:26.031321 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:26.531018 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:26.531091 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:26.531351 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:27.031180 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:27.031252 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:27.031581 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:27.530252 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:27.530331 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:27.530667 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:28.030211 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:28.030284 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:28.030564 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:28.530284 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:28.530361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:28.530659 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:28.530707 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:29.030650 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:29.030721 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:29.031052 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:29.530749 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:29.530823 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:29.531149 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:30.031046 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:30.031137 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:30.031519 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:30.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:30.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:30.530684 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:30.530743 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:31.030206 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:31.030288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:31.030560 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:31.530234 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:31.530321 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:31.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:32.030432 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:32.030511 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:32.030861 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:32.530557 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:32.530633 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:32.530986 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:32.531075 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:33.030858 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:33.030935 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:33.031277 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:33.531109 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:33.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:33.531577 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:34.030306 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:34.030382 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:34.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:34.530223 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:34.530319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:34.530708 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:35.030567 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:35.030667 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:35.031054 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:35.031114 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:35.530355 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:35.530425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:35.530748 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:36.030259 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:36.030360 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:36.030744 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:36.530439 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:36.530514 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:36.530839 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:37.030151 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:37.030234 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:37.030595 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:37.530363 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:37.530442 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:37.530778 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:37.530853 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:38.030262 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:38.030348 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:38.030702 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:38.530390 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:38.530461 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:38.530733 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:39.030687 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:39.030760 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:39.031111 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:39.530923 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:39.531001 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:39.531339 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:39.531397 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:40.030955 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:40.031030 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:40.031319 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:40.531063 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:40.531139 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:40.531495 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:41.031163 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:41.031238 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:41.031591 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:41.530249 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:41.530323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:41.530587 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:42.030362 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:42.030453 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:42.030854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:42.030924 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:42.530577 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:42.530657 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:42.531023 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:43.030790 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:43.030866 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:43.031190 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:43.530930 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:43.531021 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:43.531357 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:44.031028 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:44.031107 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:44.031450 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:44.031512 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:44.530164 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:44.530233 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:44.530544 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:45.031170 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:45.031261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:45.031590 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:45.530211 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:45.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:45.530682 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:46.030354 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:46.030422 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:46.030698 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:46.530232 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:46.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:46.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:46.530696 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:47.030366 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:47.030444 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:47.030742 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:47.530404 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:47.530478 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:47.530752 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:48.030496 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:48.030575 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:48.030882 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:48.530214 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:48.530292 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:48.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:49.030376 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.030444 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.030775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:49.030832 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:49.530474 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.530545 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.530877 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.030914 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.030991 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.031360 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.531113 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.531458 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.030169 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.030240 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.030588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.530249 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.530328 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.530647 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:51.530704 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:52.030197 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.030269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.030691 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:52.530433 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.530506 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.530821 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.030577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.030964 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.530257 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:54.030277 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.030388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.030914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:54.030974 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:54.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.530304 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.530629 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.030620 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.030692 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.030975 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.530238 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.530627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.030306 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.530440 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.530509 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.530781 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:56.530820 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:57.030493 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.030591 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.030923 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:57.530749 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.530825 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.531153 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.030917 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.030996 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.031309 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.530552 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.530658 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.531261 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:58.531332 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:59.031089 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.031172 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.031521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:59.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.530308 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.530593 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.030748 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.030831 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.031142 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.530904 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.530989 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.531375 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:00.531435 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:01.030988 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.031055 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.031330 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:01.531137 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.531217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.531543 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.030312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.030660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.530197 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.530553 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:03.030250 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:03.030323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:03.030669 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:03.030723 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:03.530375 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:03.530452 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:03.530800 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:04.030214 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:04.030295 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:04.030627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:04.530317 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:04.530420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:04.530765 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:05.030596 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:05.030677 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:05.031020 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:05.031084 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:05.530367 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:05.530441 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:05.530720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:06.030267 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:06.030359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:06.030720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:06.530416 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:06.530489 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:06.530819 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:07.030188 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:07.030264 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:07.030539 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:07.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:07.530283 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:07.530594 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:07.530643 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:08.030372 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:08.030476 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:08.030889 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:08.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:08.530253 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:08.530521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:09.031107 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:09.031182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:09.031487 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:09.530173 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:09.530246 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:09.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:10.030187 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:10.030261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:10.030583 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:10.030632 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:10.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:10.530285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:10.530616 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:11.030321 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:11.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:11.030717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:11.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:11.530260 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:11.530567 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:12.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:12.030345 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:12.030692 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:12.030750 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:12.530410 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:12.530491 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:12.530831 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:13.030508 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:13.030583 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:13.030845 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:14.530619 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET request/response pair repeats every 500ms through 10:45:14.530, each attempt returning an empty response, and node_ready.go:55 logs the same "connection refused (will retry)" warning roughly every 2 to 2.5 seconds ...]
	I1217 10:45:15.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:15.030575 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:15.030910 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:15.530437 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:15.530554 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:15.530900 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:16.030216 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:16.030305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:16.030644 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:16.530338 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:16.530413 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:16.530783 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:16.530841 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:17.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:17.030561 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:17.030881 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:17.530218 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:17.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:17.530621 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:18.030226 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:18.030315 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:18.030655 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:18.530196 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:18.530278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:18.530553 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:19.031062 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:19.031145 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:19.031472 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:19.031531 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:19.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:19.530282 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:19.530610 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:20.030544 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:20.030624 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:20.030925 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:20.530202 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:20.530275 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:20.530644 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:21.030361 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:21.030463 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:21.030812 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:21.530486 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:21.530558 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:21.530871 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:21.530921 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:22.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:22.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:22.030680 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:22.530233 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:22.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:22.530661 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:23.030227 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:23.030300 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:23.030564 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:23.530250 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:23.530342 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:23.530762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:24.030398 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:24.030590 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:24.031036 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:24.031106 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:24.530825 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:24.530892 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:24.531165 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:25.031145 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:25.031231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:25.031590 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:25.530287 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:25.530385 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:25.530787 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:26.030475 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:26.030551 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:26.030935 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:26.530656 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:26.530743 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:26.531109 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:26.531165 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:27.030982 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:27.031070 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:27.031412 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:27.530790 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:27.530858 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:27.531125 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:28.030995 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:28.031074 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:28.031452 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:28.530173 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:28.530254 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:28.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:29.030339 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:29.030432 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:29.030724 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:29.030767 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:29.530501 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:29.530583 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:29.530943 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:30.030872 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:30.030956 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:30.031277 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:30.531026 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:30.531096 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:30.531388 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:31.031173 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:31.031248 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:31.031592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:31.031655 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:31.530219 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:31.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:31.530619 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:32.030304 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:32.030380 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:32.030665 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:32.530207 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:32.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:32.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:33.030347 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:33.030429 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:33.030767 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:33.530186 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:33.530259 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:33.530528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:33.530569 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:34.030234 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:34.030320 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:34.030648 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:34.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:34.530303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:34.530668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:35.030514 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:35.030598 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:35.030879 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:35.530539 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:35.530621 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:35.530944 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:35.530999 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:36.030792 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:36.030868 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:36.031197 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:36.530952 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:36.531027 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:36.531293 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:37.031128 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:37.031222 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:37.031596 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:37.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:37.530284 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:37.530618 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:38.030192 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:38.030278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:38.030552 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:38.030630 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:38.530218 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:38.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:38.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:39.030659 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:39.030738 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:39.031056 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:39.530839 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:39.530914 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:39.531181 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:40.031117 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.031198 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.031558 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:40.031631 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:40.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.530292 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.530625 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.030178 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.030256 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.030535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.030366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.030455 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.030891 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.530441 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.530519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.530792 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:42.530833 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:43.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.030585 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.030904 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:43.530228 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.530630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.030307 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.030381 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.030707 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.530227 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.530296 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:45.031339 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.031427 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.031745 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:45.031809 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:45.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.530545 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.030321 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.030687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.530366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.530762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.030423 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.030519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.030809 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.530502 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.530580 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.530914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:47.530970 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:48.030658 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.030731 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.031047 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:48.530357 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.530426 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.530764 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.030805 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.030882 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.031204 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.530982 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.531053 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:49.531427 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:50.031030 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.031115 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.031532 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:50.530205 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.530299 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.530623 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.030286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.030614 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.530323 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.530401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.530711 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:52.030254 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.030329 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.030627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:52.030687 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:52.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.530659 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.030195 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.030278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.030640 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.530268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.530359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.530765 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:54.030501 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.030589 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.030906 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:54.030956 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:54.530370 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.530458 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.530775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.030784 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.030866 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.031248 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.531028 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.531111 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.531412 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:56.031149 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.031232 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.031533 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:56.031587 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.530282 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.030260 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.030673 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.530209 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.530285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.530565 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.030688 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.530418 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.530493 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.530876 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:58.530935 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:59.030221 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.030349 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.030710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:59.530411 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.530486 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.530845 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:46:00.030785 2968376 type.go:168] "Request Body" body=""
	I1217 10:46:00.030868 2968376 node_ready.go:38] duration metric: took 6m0.00085226s for node "functional-232588" to be "Ready" ...
	I1217 10:46:00.039967 2968376 out.go:203] 
	W1217 10:46:00.043066 2968376 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 10:46:00.043095 2968376 out.go:285] * 
	W1217 10:46:00.047185 2968376 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:46:00.056487 2968376 out.go:203] 
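The run ends with minikube's node-readiness wait expiring: the default 6m0s --wait-timeout elapsed without one successful poll. As a sketch, the same readiness condition can be probed by hand using the node name and port from the log above, and the wait can be lengthened in case a cluster is merely slow rather than broken:

	# Manual equivalent of the node_ready.go poll (expect "True" on a healthy node):
	kubectl get node functional-232588 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Retry the start with a longer readiness budget:
	minikube start -p functional-232588 --wait=all --wait-timeout=10m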
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
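The empty table above means the container runtime never created a single pod sandbox or container. As a quick cross-check (a sketch, assuming minikube's bundled crictl inside the node), the same can be confirmed directly against the CRI:

	minikube ssh -p functional-232588 -- sudo crictl pods    # sandboxes; empty output matches the table
	minikube ssh -p functional-232588 -- sudo crictl ps -a   # containers in any state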
	
	==> containerd <==
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457543735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457644238Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457744256Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457820841Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457879235Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457938007Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.457996279Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.458061483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.458126967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.458208163Z" level=info msg="Connect containerd service"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.458558284Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.459206966Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.475859249Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.475925241Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.475972108Z" level=info msg="Start subscribing containerd event"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.476024300Z" level=info msg="Start recovering state"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.528406781Z" level=info msg="Start event monitor"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.528633279Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.528730630Z" level=info msg="Start streaming server"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.528823026Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.529043731Z" level=info msg="runtime interface starting up..."
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.529125706Z" level=info msg="starting plugins..."
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.529191904Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 10:39:57 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 10:39:57 functional-232588 containerd[5229]: time="2025-12-17T10:39:57.529865783Z" level=info msg="containerd successfully booted in 0.097082s"
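The only error in the containerd startup log above is the CNI loader finding no config in /etc/cni/net.d. At this stage that is expected (minikube installs its CNI config only after the control plane comes up), so it is a symptom rather than the cause; purely for illustration, a minimal bridge conflist that would satisfy the loader (file name, bridge name, and subnet are all hypothetical) could be written as:

	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "minimal-bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.244.0.0/16" }]] }
	    }
	  ]
	}
	EOF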
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:46:04.333378    8603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:04.333876    8603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:04.335485    8603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:04.336119    8603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:04.337055    8603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
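Both the in-cluster poll (192.168.49.2:8441) and kubectl here (localhost:8441) are refused, so nothing is listening on the apiserver port at all. Two standard probes from inside the node, as a sketch:

	sudo ss -ltnp | grep 8441                 # is any process bound to the apiserver port?
	curl -sk https://localhost:8441/healthz   # a healthy apiserver answers "ok"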
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26 – Dec17 10:16] overlayfs: idmapped layers are currently not supported (same message repeated 23 more times across this window; individual entries elided)
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:46:04 up 16:28,  0 user,  load average: 0.54, 0.31, 0.78
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 10:46:01 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:01 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 17 10:46:01 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:02 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:02 functional-232588 kubelet[8463]: E1217 10:46:02.075789    8463 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:02 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:02 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:02 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 17 10:46:02 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:02 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:02 functional-232588 kubelet[8478]: E1217 10:46:02.833749    8478 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:02 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:02 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:03 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 17 10:46:03 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:03 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:03 functional-232588 kubelet[8513]: E1217 10:46:03.562126    8513 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:03 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:03 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:04 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 17 10:46:04 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:04 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:04 functional-232588 kubelet[8596]: E1217 10:46:04.318717    8596 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:04 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:04 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
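
Reading the captured logs together, the failure chain is clear: kubelet exits during configuration validation because the host is still on cgroup v1 (the systemd restart counter climbs from 811 to 814 in the excerpt above), so kube-apiserver never comes up on port 8441 and every kubectl call is refused. A quick, generic way to confirm which cgroup hierarchy a host or node container is using (not part of the test suite; shown here only as a diagnostic sketch):

	# prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/

On this runner (Ubuntu 20.04, kernel 5.15.0-1084-aws) the check would be expected to report cgroup v1, consistent with the kubelet validation error.
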
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (401.997331ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.43s)
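
The status --format={{.APIServer}} call above queries a single field of minikube's status through a Go template. For reference, a sketch of the same query against the other commonly reported fields (assuming the standard minikube status field names Host, Kubelet, APIServer and Kubeconfig):

	out/minikube-linux-arm64 status -p functional-232588 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'

Given the post-mortem above, this would be expected to print host=Running but apiserver=Stopped.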

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 kubectl -- --context functional-232588 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 kubectl -- --context functional-232588 get pods: exit status 1 (104.311246ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-232588 kubectl -- --context functional-232588 get pods": exit status 1
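
For context: "minikube kubectl" invokes a kubectl binary matching the cluster's Kubernetes version (v1.35.0-rc.1 here), and everything after "--" is passed through to kubectl unchanged. A minimal passthrough invocation of the same form, for illustration:

	out/minikube-linux-arm64 -p functional-232588 kubectl -- get pods -A

In this run it fails for the same reason as the direct kubectl call: the API server at 192.168.49.2:8441 is not accepting connections.
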
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
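
The dump above is the full inspect JSON; individual fields can be extracted with the same Go-template mechanism the harness itself uses later in this log, for example (illustrative only):

	docker inspect -f '{{.State.Status}}' functional-232588
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-232588

Against the state captured here these would print "running" and "35736" respectively: the container is up even though the control plane inside it is not.
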
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (333.998817ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-626013 image ls --format short --alsologtostderr                                                                                           │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls --format yaml --alsologtostderr                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh     │ functional-626013 ssh pgrep buildkitd                                                                                                                 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ image   │ functional-626013 image ls --format json --alsologtostderr                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls --format table --alsologtostderr                                                                                           │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr                                                │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls                                                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ delete  │ -p functional-626013                                                                                                                                  │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ start   │ -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ start   │ -p functional-232588 --alsologtostderr -v=8                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:39 UTC │                     │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:latest                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add minikube-local-cache-test:functional-232588                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache delete minikube-local-cache-test:functional-232588                                                                            │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl images                                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	│ cache   │ functional-232588 cache reload                                                                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ kubectl │ functional-232588 kubectl -- --context functional-232588 get pods                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:39:54
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:39:54.887492 2968376 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:39:54.887669 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887679 2968376 out.go:374] Setting ErrFile to fd 2...
	I1217 10:39:54.887684 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887953 2968376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:39:54.888377 2968376 out.go:368] Setting JSON to false
	I1217 10:39:54.889321 2968376 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":58945,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:39:54.889394 2968376 start.go:143] virtualization:  
	I1217 10:39:54.892820 2968376 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:39:54.896642 2968376 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:39:54.896710 2968376 notify.go:221] Checking for updates...
	I1217 10:39:54.900325 2968376 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:39:54.903432 2968376 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:54.906306 2968376 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:39:54.909105 2968376 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:39:54.911889 2968376 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:39:54.915217 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:54.915331 2968376 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:39:54.937972 2968376 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:39:54.938091 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.000760 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:54.991784263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.000879 2968376 docker.go:319] overlay module found
	I1217 10:39:55.005745 2968376 out.go:179] * Using the docker driver based on existing profile
	I1217 10:39:55.010762 2968376 start.go:309] selected driver: docker
	I1217 10:39:55.010794 2968376 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.010914 2968376 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:39:55.011044 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.065164 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:55.056463493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.065569 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:55.065633 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:55.065694 2968376 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.070664 2968376 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:39:55.073373 2968376 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:39:55.076286 2968376 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:39:55.079282 2968376 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:39:55.079315 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:55.079350 2968376 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:39:55.079358 2968376 cache.go:65] Caching tarball of preloaded images
	I1217 10:39:55.079437 2968376 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:39:55.079447 2968376 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 10:39:55.079550 2968376 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:39:55.100219 2968376 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:39:55.100251 2968376 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 10:39:55.100265 2968376 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:39:55.100297 2968376 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:39:55.100355 2968376 start.go:364] duration metric: took 36.061µs to acquireMachinesLock for "functional-232588"
	I1217 10:39:55.100378 2968376 start.go:96] Skipping create...Using existing machine configuration
	I1217 10:39:55.100389 2968376 fix.go:54] fixHost starting: 
	I1217 10:39:55.100690 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:55.118322 2968376 fix.go:112] recreateIfNeeded on functional-232588: state=Running err=<nil>
	W1217 10:39:55.118352 2968376 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 10:39:55.121614 2968376 out.go:252] * Updating the running docker "functional-232588" container ...
	I1217 10:39:55.121666 2968376 machine.go:94] provisionDockerMachine start ...
	I1217 10:39:55.121762 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.140448 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.140568 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.140576 2968376 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:39:55.272992 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.273058 2968376 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:39:55.273155 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.294100 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.294200 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.294209 2968376 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:39:55.433566 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.433651 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.452012 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.452130 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.452152 2968376 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:39:55.584734 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 10:39:55.584801 2968376 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:39:55.584835 2968376 ubuntu.go:190] setting up certificates
	I1217 10:39:55.584846 2968376 provision.go:84] configureAuth start
	I1217 10:39:55.584917 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:55.602169 2968376 provision.go:143] copyHostCerts
	I1217 10:39:55.602226 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602261 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:39:55.602273 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602347 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:39:55.602482 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602507 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:39:55.602512 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602540 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:39:55.602588 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602609 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:39:55.602618 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602651 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:39:55.602701 2968376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
	I1217 10:39:55.859794 2968376 provision.go:177] copyRemoteCerts
	I1217 10:39:55.859877 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:39:55.859950 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.877144 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:55.974879 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 10:39:55.974962 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:39:55.992960 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 10:39:55.993024 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 10:39:56.017007 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 10:39:56.017075 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:39:56.039037 2968376 provision.go:87] duration metric: took 454.177473ms to configureAuth
	I1217 10:39:56.039062 2968376 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:39:56.039248 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:56.039255 2968376 machine.go:97] duration metric: took 917.583269ms to provisionDockerMachine
	I1217 10:39:56.039263 2968376 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:39:56.039274 2968376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:39:56.039330 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:39:56.039374 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.064674 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.164379 2968376 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:39:56.167903 2968376 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 10:39:56.167924 2968376 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 10:39:56.167929 2968376 command_runner.go:130] > VERSION_ID="12"
	I1217 10:39:56.167934 2968376 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 10:39:56.167939 2968376 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 10:39:56.167943 2968376 command_runner.go:130] > ID=debian
	I1217 10:39:56.167947 2968376 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 10:39:56.167952 2968376 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 10:39:56.167958 2968376 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 10:39:56.168026 2968376 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:39:56.168043 2968376 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:39:56.168054 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:39:56.168116 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:39:56.168193 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:39:56.168199 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /etc/ssl/certs/29245742.pem
	I1217 10:39:56.168276 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:39:56.168280 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> /etc/test/nested/copy/2924574/hosts
	I1217 10:39:56.168325 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:39:56.175992 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:56.194065 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:39:56.211618 2968376 start.go:296] duration metric: took 172.340234ms for postStartSetup
	I1217 10:39:56.211696 2968376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:39:56.211740 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.229142 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.321408 2968376 command_runner.go:130] > 18%
	I1217 10:39:56.321497 2968376 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:39:56.325775 2968376 command_runner.go:130] > 160G
	I1217 10:39:56.326243 2968376 fix.go:56] duration metric: took 1.225850623s for fixHost
	I1217 10:39:56.326261 2968376 start.go:83] releasing machines lock for "functional-232588", held for 1.22589425s
	I1217 10:39:56.326382 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:56.351440 2968376 ssh_runner.go:195] Run: cat /version.json
	I1217 10:39:56.351467 2968376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:39:56.351509 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.351532 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.377953 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.378286 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.472298 2968376 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 10:39:56.558575 2968376 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 10:39:56.561329 2968376 ssh_runner.go:195] Run: systemctl --version
	I1217 10:39:56.567378 2968376 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 10:39:56.567418 2968376 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 10:39:56.567866 2968376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 10:39:56.572178 2968376 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 10:39:56.572242 2968376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:39:56.572327 2968376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:39:56.580077 2968376 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 10:39:56.580102 2968376 start.go:496] detecting cgroup driver to use...
	I1217 10:39:56.580153 2968376 detect.go:187] detected "cgroupfs" cgroup driver on host os
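	detect.go reports the host is using the "cgroupfs" driver, which steers the containerd edits below. Minikube's own detection code is not shown in this log; a common standalone check distinguishes cgroup v1 from v2 by the filesystem type mounted at /sys/fs/cgroup (a sketch of that technique, not minikube's implementation):

	    # cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1 hierarchy.
	    case "$(stat -fc %T /sys/fs/cgroup/)" in
	      cgroup2fs) echo "cgroup v2" ;;
	      tmpfs)     echo "cgroup v1" ;;
	      *)         echo "unrecognized cgroup layout" ;;
	    esac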
	I1217 10:39:56.580207 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:39:56.595473 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:39:56.608619 2968376 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:39:56.608683 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:39:56.624626 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:39:56.639198 2968376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:39:56.750544 2968376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:39:56.881240 2968376 docker.go:234] disabling docker service ...
	I1217 10:39:56.881321 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:39:56.896533 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:39:56.909686 2968376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:39:57.029179 2968376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:39:57.147650 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 10:39:57.160165 2968376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:39:57.172821 2968376 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 10:39:57.174291 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:39:57.183184 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:39:57.192049 2968376 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:39:57.192173 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:39:57.201301 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.210430 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:39:57.219288 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.228051 2968376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:39:57.235994 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:39:57.245724 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:39:57.254416 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
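	The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the cgroupfs driver, migrate io.containerd.runtime.v1.linux and runc.v1 runtimes to io.containerd.runc.v2, point the CNI conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true under the CRI plugin table. The same edits condensed into one invocation (a sketch of the commands logged above; try it on a scratch copy of the file first):

	    CFG=/etc/containerd/config.toml
	    sudo sed -i -r \
	      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' \
	      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
	      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
	      -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' \
	      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
	      "$CFG"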
	I1217 10:39:57.263062 2968376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:39:57.269668 2968376 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 10:39:57.270584 2968376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:39:57.278345 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.386138 2968376 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 10:39:57.532674 2968376 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:39:57.532750 2968376 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:39:57.536608 2968376 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 10:39:57.536637 2968376 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 10:39:57.536644 2968376 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1217 10:39:57.536652 2968376 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:57.536659 2968376 command_runner.go:130] > Access: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536664 2968376 command_runner.go:130] > Modify: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536669 2968376 command_runner.go:130] > Change: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536673 2968376 command_runner.go:130] >  Birth: -
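	start.go:543 allows 60s for the containerd socket to reappear after the restart; the stat output above shows it came back within a fraction of a second. An equivalent bounded wait in shell (the one-second poll interval is an assumption; minikube's retry cadence may differ):

	    SOCK=/run/containerd/containerd.sock
	    for _ in $(seq 1 60); do
	      stat "$SOCK" >/dev/null 2>&1 && break   # socket exists: containerd is back
	      sleep 1
	    done
	    stat "$SOCK" >/dev/null 2>&1 || { echo "timed out waiting for $SOCK" >&2; exit 1; }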
	I1217 10:39:57.537168 2968376 start.go:564] Will wait 60s for crictl version
	I1217 10:39:57.537224 2968376 ssh_runner.go:195] Run: which crictl
	I1217 10:39:57.540827 2968376 command_runner.go:130] > /usr/local/bin/crictl
	I1217 10:39:57.541302 2968376 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:39:57.573267 2968376 command_runner.go:130] > Version:  0.1.0
	I1217 10:39:57.573463 2968376 command_runner.go:130] > RuntimeName:  containerd
	I1217 10:39:57.573480 2968376 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 10:39:57.573656 2968376 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 10:39:57.575908 2968376 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:39:57.575979 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.593702 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.595828 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.613025 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.620756 2968376 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 10:39:57.623690 2968376 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
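	The docker network inspect call above extracts name, driver, subnet, gateway, MTU, and attached container IPs from the network's JSON with a single Go template. When debugging by hand, the same fields are easier to read fetched one at a time (network name as in the log):

	    NET=functional-232588
	    docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' "$NET"
	    # index yields an empty string when the MTU option was never set on the network.
	    docker network inspect -f '{{index .Options "com.docker.network.driver.mtu"}}' "$NET"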
	I1217 10:39:57.639560 2968376 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:39:57.643332 2968376 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 10:39:57.643691 2968376 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:39:57.643808 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:57.643873 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.668138 2968376 command_runner.go:130] > {
	I1217 10:39:57.668155 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.668160 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668169 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.668174 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668179 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.668183 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668187 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668196 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.668199 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668204 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.668208 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668212 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668215 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668218 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668226 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.668231 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668236 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.668239 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668244 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668252 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.668260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668264 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.668267 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668271 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668274 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668284 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.668288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668293 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.668296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668303 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668311 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.668314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668319 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.668323 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.668327 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668330 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668333 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668340 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.668344 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668348 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.668351 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668355 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668363 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.668366 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668370 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.668375 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668379 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668382 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668386 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668390 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668393 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668396 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668405 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.668409 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668433 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.668438 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668442 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668450 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.668454 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668458 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.668461 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668470 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668478 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668482 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668485 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668489 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668492 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668498 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.668503 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.668512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668517 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668530 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.668537 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668542 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.668545 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668549 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668557 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668562 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668576 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668580 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668583 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668589 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.668593 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668598 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.668608 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668614 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668622 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.668629 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668633 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.668637 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668641 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668645 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668648 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668655 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.668662 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668668 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.668672 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668678 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668689 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.668692 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668696 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.668702 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668706 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668719 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668723 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668726 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668730 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668734 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668740 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.668748 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668753 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.668756 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668760 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668767 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.668773 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668777 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.668781 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668792 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.668799 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668803 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668807 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.668810 2968376 command_runner.go:130] >     }
	I1217 10:39:57.668813 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.668816 2968376 command_runner.go:130] > }
	I1217 10:39:57.671107 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.671128 2968376 containerd.go:534] Images already preloaded, skipping extraction
	I1217 10:39:57.671185 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.697059 2968376 command_runner.go:130] > {
	I1217 10:39:57.697078 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.697083 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697093 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.697108 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697114 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.697118 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697122 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697131 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.697142 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697147 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.697155 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697159 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697162 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697166 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697175 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.697180 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697185 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.697188 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697192 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697202 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.697205 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697209 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.697213 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697216 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697219 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697222 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697229 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.697233 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697238 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.697242 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697249 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697256 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.697260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697264 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.697268 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.697272 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697275 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697284 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.697288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697293 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.697296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697300 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697310 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.697314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697318 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.697323 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697327 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697330 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697334 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697338 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697341 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697344 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697350 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.697354 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697359 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.697363 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697366 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697374 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.697377 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697381 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.697384 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697393 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697396 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697400 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697403 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697406 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697409 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697416 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.697419 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697425 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.697428 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697432 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697440 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.697443 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697448 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.697460 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697464 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697467 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697470 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697474 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697477 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697480 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697486 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.697490 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697495 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.697498 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697501 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.697512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697515 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.697519 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697523 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697526 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697530 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697536 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.697540 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697545 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.697548 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697552 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697560 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.697563 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697567 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.697570 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697574 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697578 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697581 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697585 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697588 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697594 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697600 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.697604 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697609 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.697612 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697615 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697622 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.697626 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697630 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.697633 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697637 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.697641 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697645 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697649 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.697652 2968376 command_runner.go:130] >     }
	I1217 10:39:57.697655 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.697657 2968376 command_runner.go:130] > }
	I1217 10:39:57.699989 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.700059 2968376 cache_images.go:86] Images are preloaded, skipping loading
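	Both crictl images --output json dumps list the same nine images, so containerd.go:627 concludes the v1.35.0-rc.1 preload is intact and cache_images.go skips any loading. The same check run by hand with jq, matching a few of the tags that appear in the JSON above (the grep list is truncated for brevity):

	    sudo crictl images --output json \
	      | jq -r '.images[].repoTags[]' \
	      | grep -E 'kube-apiserver:v1.35.0-rc.1|coredns:v1.13.1|pause:3.10.1'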
	I1217 10:39:57.700081 2968376 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:39:57.700225 2968376 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
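	kubeadm.go:947 renders the kubelet systemd drop-in shown above; the empty ExecStart= line clears the command inherited from the base unit before the second ExecStart= supplies minikube's kubelet flags. It lands on disk as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 326-byte scp further below). Once written, the merged unit can be checked with:

	    # Prints kubelet.service plus every drop-in, including 10-kubeadm.conf.
	    systemctl cat kubelet
	    sudo systemctl daemon-reload && sudo systemctl start kubelet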
	I1217 10:39:57.700311 2968376 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:39:57.722782 2968376 command_runner.go:130] > {
	I1217 10:39:57.722800 2968376 command_runner.go:130] >   "cniconfig": {
	I1217 10:39:57.722805 2968376 command_runner.go:130] >     "Networks": [
	I1217 10:39:57.722813 2968376 command_runner.go:130] >       {
	I1217 10:39:57.722822 2968376 command_runner.go:130] >         "Config": {
	I1217 10:39:57.722827 2968376 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 10:39:57.722835 2968376 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 10:39:57.722839 2968376 command_runner.go:130] >           "Plugins": [
	I1217 10:39:57.722843 2968376 command_runner.go:130] >             {
	I1217 10:39:57.722847 2968376 command_runner.go:130] >               "Network": {
	I1217 10:39:57.722851 2968376 command_runner.go:130] >                 "ipam": {},
	I1217 10:39:57.722856 2968376 command_runner.go:130] >                 "type": "loopback"
	I1217 10:39:57.722860 2968376 command_runner.go:130] >               },
	I1217 10:39:57.722866 2968376 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 10:39:57.722869 2968376 command_runner.go:130] >             }
	I1217 10:39:57.722873 2968376 command_runner.go:130] >           ],
	I1217 10:39:57.722882 2968376 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 10:39:57.722886 2968376 command_runner.go:130] >         },
	I1217 10:39:57.722893 2968376 command_runner.go:130] >         "IFName": "lo"
	I1217 10:39:57.722896 2968376 command_runner.go:130] >       }
	I1217 10:39:57.722899 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722908 2968376 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 10:39:57.722912 2968376 command_runner.go:130] >     "PluginDirs": [
	I1217 10:39:57.722915 2968376 command_runner.go:130] >       "/opt/cni/bin"
	I1217 10:39:57.722919 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722923 2968376 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 10:39:57.722926 2968376 command_runner.go:130] >     "Prefix": "eth"
	I1217 10:39:57.722930 2968376 command_runner.go:130] >   },
	I1217 10:39:57.722933 2968376 command_runner.go:130] >   "config": {
	I1217 10:39:57.722936 2968376 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 10:39:57.722940 2968376 command_runner.go:130] >       "/etc/cdi",
	I1217 10:39:57.722944 2968376 command_runner.go:130] >       "/var/run/cdi"
	I1217 10:39:57.722948 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722952 2968376 command_runner.go:130] >     "cni": {
	I1217 10:39:57.722955 2968376 command_runner.go:130] >       "binDir": "",
	I1217 10:39:57.722959 2968376 command_runner.go:130] >       "binDirs": [
	I1217 10:39:57.722962 2968376 command_runner.go:130] >         "/opt/cni/bin"
	I1217 10:39:57.722965 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.722969 2968376 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 10:39:57.722973 2968376 command_runner.go:130] >       "confTemplate": "",
	I1217 10:39:57.722983 2968376 command_runner.go:130] >       "ipPref": "",
	I1217 10:39:57.722986 2968376 command_runner.go:130] >       "maxConfNum": 1,
	I1217 10:39:57.722991 2968376 command_runner.go:130] >       "setupSerially": false,
	I1217 10:39:57.722995 2968376 command_runner.go:130] >       "useInternalLoopback": false
	I1217 10:39:57.722998 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723004 2968376 command_runner.go:130] >     "containerd": {
	I1217 10:39:57.723008 2968376 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 10:39:57.723013 2968376 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 10:39:57.723017 2968376 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 10:39:57.723021 2968376 command_runner.go:130] >       "runtimes": {
	I1217 10:39:57.723024 2968376 command_runner.go:130] >         "runc": {
	I1217 10:39:57.723029 2968376 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 10:39:57.723033 2968376 command_runner.go:130] >           "PodAnnotations": null,
	I1217 10:39:57.723038 2968376 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 10:39:57.723046 2968376 command_runner.go:130] >           "cgroupWritable": false,
	I1217 10:39:57.723050 2968376 command_runner.go:130] >           "cniConfDir": "",
	I1217 10:39:57.723054 2968376 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 10:39:57.723058 2968376 command_runner.go:130] >           "io_type": "",
	I1217 10:39:57.723061 2968376 command_runner.go:130] >           "options": {
	I1217 10:39:57.723065 2968376 command_runner.go:130] >             "BinaryName": "",
	I1217 10:39:57.723069 2968376 command_runner.go:130] >             "CriuImagePath": "",
	I1217 10:39:57.723074 2968376 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 10:39:57.723077 2968376 command_runner.go:130] >             "IoGid": 0,
	I1217 10:39:57.723081 2968376 command_runner.go:130] >             "IoUid": 0,
	I1217 10:39:57.723085 2968376 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 10:39:57.723089 2968376 command_runner.go:130] >             "Root": "",
	I1217 10:39:57.723092 2968376 command_runner.go:130] >             "ShimCgroup": "",
	I1217 10:39:57.723096 2968376 command_runner.go:130] >             "SystemdCgroup": false
	I1217 10:39:57.723100 2968376 command_runner.go:130] >           },
	I1217 10:39:57.723105 2968376 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 10:39:57.723111 2968376 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 10:39:57.723115 2968376 command_runner.go:130] >           "runtimePath": "",
	I1217 10:39:57.723120 2968376 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 10:39:57.723124 2968376 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 10:39:57.723128 2968376 command_runner.go:130] >           "snapshotter": ""
	I1217 10:39:57.723131 2968376 command_runner.go:130] >         }
	I1217 10:39:57.723134 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723136 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723146 2968376 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 10:39:57.723151 2968376 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 10:39:57.723156 2968376 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 10:39:57.723161 2968376 command_runner.go:130] >     "disableApparmor": false,
	I1217 10:39:57.723166 2968376 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 10:39:57.723170 2968376 command_runner.go:130] >     "disableProcMount": false,
	I1217 10:39:57.723174 2968376 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 10:39:57.723177 2968376 command_runner.go:130] >     "enableCDI": true,
	I1217 10:39:57.723181 2968376 command_runner.go:130] >     "enableSelinux": false,
	I1217 10:39:57.723188 2968376 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 10:39:57.723195 2968376 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 10:39:57.723200 2968376 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 10:39:57.723204 2968376 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 10:39:57.723208 2968376 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 10:39:57.723212 2968376 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 10:39:57.723216 2968376 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 10:39:57.723222 2968376 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723226 2968376 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 10:39:57.723231 2968376 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723236 2968376 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 10:39:57.723241 2968376 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 10:39:57.723243 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723247 2968376 command_runner.go:130] >   "features": {
	I1217 10:39:57.723251 2968376 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 10:39:57.723254 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723257 2968376 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 10:39:57.723267 2968376 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723277 2968376 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723281 2968376 command_runner.go:130] >   "runtimeHandlers": [
	I1217 10:39:57.723283 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723287 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723291 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723297 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723299 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723302 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723305 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723308 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723315 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723319 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723322 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723326 2968376 command_runner.go:130] >       "name": "runc"
	I1217 10:39:57.723328 2968376 command_runner.go:130] >     }
	I1217 10:39:57.723335 2968376 command_runner.go:130] >   ],
	I1217 10:39:57.723338 2968376 command_runner.go:130] >   "status": {
	I1217 10:39:57.723342 2968376 command_runner.go:130] >     "conditions": [
	I1217 10:39:57.723345 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723348 2968376 command_runner.go:130] >         "message": "",
	I1217 10:39:57.723352 2968376 command_runner.go:130] >         "reason": "",
	I1217 10:39:57.723356 2968376 command_runner.go:130] >         "status": true,
	I1217 10:39:57.723361 2968376 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 10:39:57.723364 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723367 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723373 2968376 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 10:39:57.723378 2968376 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 10:39:57.723382 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723386 2968376 command_runner.go:130] >         "type": "NetworkReady"
	I1217 10:39:57.723389 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723391 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723414 2968376 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 10:39:57.723421 2968376 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 10:39:57.723426 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723432 2968376 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 10:39:57.723434 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723437 2968376 command_runner.go:130] >     ]
	I1217 10:39:57.723440 2968376 command_runner.go:130] >   }
	I1217 10:39:57.723442 2968376 command_runner.go:130] > }
	I1217 10:39:57.726093 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:57.726119 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:57.726139 2968376 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:39:57.726166 2968376 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:39:57.726283 2968376 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 10:39:57.726359 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:39:57.733320 2968376 command_runner.go:130] > kubeadm
	I1217 10:39:57.733342 2968376 command_runner.go:130] > kubectl
	I1217 10:39:57.733347 2968376 command_runner.go:130] > kubelet
	I1217 10:39:57.734253 2968376 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 10:39:57.734351 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:39:57.741900 2968376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:39:57.754718 2968376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:39:57.767131 2968376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
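	The multi-document kubeadm config rendered above (InitConfiguration and ClusterConfiguration at kubeadm.k8s.io/v1beta4, plus KubeletConfiguration and KubeProxyConfiguration) is what just landed in /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file before it is applied; a sketch, assuming the config validate subcommand is available in this kubeadm build and that it is run on the node where the file was written:

	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new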
	I1217 10:39:57.780328 2968376 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:39:57.783968 2968376 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 10:39:57.784263 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.891500 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:39:58.252332 2968376 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:39:58.252409 2968376 certs.go:195] generating shared ca certs ...
	I1217 10:39:58.252461 2968376 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.252670 2968376 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:39:58.252752 2968376 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:39:58.252788 2968376 certs.go:257] generating profile certs ...
	I1217 10:39:58.252943 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:39:58.253053 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:39:58.253133 2968376 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:39:58.253172 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 10:39:58.253214 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 10:39:58.253260 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 10:39:58.253294 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 10:39:58.253341 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 10:39:58.253377 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 10:39:58.253421 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 10:39:58.253456 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 10:39:58.253577 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:39:58.253658 2968376 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:39:58.253688 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:39:58.253756 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:39:58.253819 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:39:58.253883 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:39:58.253975 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:58.254044 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.254093 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem -> /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.254126 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.254782 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:39:58.276977 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:39:58.300224 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:39:58.319429 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:39:58.338203 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:39:58.355898 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:39:58.373473 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:39:58.391528 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:39:58.408858 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:39:58.426819 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:39:58.444926 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:39:58.462979 2968376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
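The NewFileAsset/scp pairs above stage the profile's PKI into the node: the apiserver and proxy-client keypairs land in /var/lib/minikube/certs, while trust copies of the CA and the user *.pem certs go to /usr/share/ca-certificates. A minimal sketch of that source-to-destination plan (illustrative paths, not minikube's vm_assets code):

    package main

    import "fmt"

    // fileAsset pairs a path on the CI host with its target inside the node,
    // mirroring the NewFileAsset lines in the log above.
    type fileAsset struct{ src, dst string }

    func main() {
    	assets := []fileAsset{
    		{"~/.minikube/profiles/functional-232588/apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"},
    		{"~/.minikube/profiles/functional-232588/proxy-client.key", "/var/lib/minikube/certs/proxy-client.key"},
    		{"~/.minikube/ca.crt", "/usr/share/ca-certificates/minikubeCA.pem"},
    	}
    	for _, a := range assets {
    		// minikube streams each file over its existing SSH session ("scp" in the log).
    		fmt.Printf("scp %s --> %s\n", a.src, a.dst)
    	}
    }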
	I1217 10:39:58.476114 2968376 ssh_runner.go:195] Run: openssl version
	I1217 10:39:58.483093 2968376 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 10:39:58.483240 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.490661 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:39:58.498193 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502204 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502289 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502352 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.543361 2968376 command_runner.go:130] > b5213941
	I1217 10:39:58.543894 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 10:39:58.551548 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.559110 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:39:58.567064 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.570982 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571071 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571149 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.611772 2968376 command_runner.go:130] > 51391683
	I1217 10:39:58.612217 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:39:58.619901 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.627496 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:39:58.635170 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639161 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639286 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639343 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.679963 2968376 command_runner.go:130] > 3ec20f2e
	I1217 10:39:58.680491 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
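The hash-and-symlink sequence above (openssl x509 -hash, ln -fs, test -L) follows OpenSSL's c_rehash convention: a CA is discoverable at verify time only if a link named <subject-hash>.0 in /etc/ssl/certs points at it, which is why b5213941.0, 51391683.0 and 3ec20f2e.0 are checked. A minimal Go sketch of the same wiring (not minikube's certs.go):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA computes the OpenSSL subject hash of a PEM CA and exposes it
    // under /etc/ssl/certs/<hash>.0, the lookup path OpenSSL uses at verify time.
    func installCA(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// ln -fs equivalent: drop any stale link, then point it at the PEM.
    	_ = os.Remove(link)
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }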
	I1217 10:39:58.687873 2968376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691452 2968376 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691483 2968376 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 10:39:58.691491 2968376 command_runner.go:130] > Device: 259,1	Inode: 3648630     Links: 1
	I1217 10:39:58.691498 2968376 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:58.691503 2968376 command_runner.go:130] > Access: 2025-12-17 10:35:51.067485305 +0000
	I1217 10:39:58.691508 2968376 command_runner.go:130] > Modify: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691513 2968376 command_runner.go:130] > Change: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691519 2968376 command_runner.go:130] >  Birth: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691792 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 10:39:58.732576 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.733078 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 10:39:58.773416 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.773947 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 10:39:58.814511 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.815058 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 10:39:58.855809 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.856437 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 10:39:58.897493 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.897637 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 10:39:58.937941 2968376 command_runner.go:130] > Certificate will not expire
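Each probe above runs openssl x509 -checkend 86400, which exits 0 only if the certificate is still valid 24 hours from now; "Certificate will not expire" is openssl's stdout for the passing case. The equivalent check in pure Go (a sketch, assuming a PEM-encoded certificate file):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // the same question `openssl x509 -checkend` answers via its exit code.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon) // false corresponds to "Certificate will not expire"
    }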
	I1217 10:39:58.938362 2968376 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:58.938478 2968376 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:39:58.938558 2968376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:39:58.967095 2968376 cri.go:89] found id: ""
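Before deciding between a fresh kubeadm init and a restart, minikube asks the CRI for any surviving kube-system containers; the empty result here (found id: "") means no control-plane containers are running yet. A local stand-in for that probe (not minikube's cri.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// crictl prints one container ID per line when --quiet is set.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
    }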
	I1217 10:39:58.967172 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:39:58.974207 2968376 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 10:39:58.974232 2968376 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 10:39:58.974239 2968376 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 10:39:58.975124 2968376 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 10:39:58.975142 2968376 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 10:39:58.975194 2968376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 10:39:58.982722 2968376 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:39:58.983159 2968376 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232588" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.983280 2968376 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232588" cluster setting kubeconfig missing "functional-232588" context setting]
	I1217 10:39:58.983551 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.984002 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.984156 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 10:39:58.984706 2968376 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 10:39:58.984730 2968376 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 10:39:58.984737 2968376 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 10:39:58.984745 2968376 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 10:39:58.984756 2968376 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 10:39:58.984794 2968376 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
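The repair described above — the kubeconfig "needs updating (will repair)" because the profile's cluster and context entries are missing — amounts to re-adding both entries plus the matching user, then rewriting the file under a lock. A sketch with client-go's clientcmd package (illustrative names and paths, not minikube's kubeconfig.go):

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repair re-adds the missing cluster/context/user entries for one profile
    // and writes the kubeconfig back in place.
    func repair(path, name, server, caFile, certFile, keyFile string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
    	cfg.AuthInfos[name] = &api.AuthInfo{ClientCertificate: certFile, ClientKey: keyFile}
    	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
    	cfg.CurrentContext = name
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	// Paths shortened for illustration; the real ones are in the log above.
    	_ = repair("kubeconfig", "functional-232588",
    		"https://192.168.49.2:8441", "ca.crt", "client.crt", "client.key")
    }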
	I1217 10:39:58.985054 2968376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 10:39:58.992764 2968376 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 10:39:58.992810 2968376 kubeadm.go:602] duration metric: took 17.660629ms to restartPrimaryControlPlane
	I1217 10:39:58.992820 2968376 kubeadm.go:403] duration metric: took 54.467316ms to StartCluster
	I1217 10:39:58.992834 2968376 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.992909 2968376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.993526 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.993746 2968376 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 10:39:58.994170 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:58.994219 2968376 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 10:39:58.994288 2968376 addons.go:70] Setting storage-provisioner=true in profile "functional-232588"
	I1217 10:39:58.994301 2968376 addons.go:239] Setting addon storage-provisioner=true in "functional-232588"
	I1217 10:39:58.994329 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:58.994354 2968376 addons.go:70] Setting default-storageclass=true in profile "functional-232588"
	I1217 10:39:58.994416 2968376 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232588"
	I1217 10:39:58.994775 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:58.994809 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.000060 2968376 out.go:179] * Verifying Kubernetes components...
	I1217 10:39:59.002988 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:59.030107 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:59.030278 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 10:39:59.030548 2968376 addons.go:239] Setting addon default-storageclass=true in "functional-232588"
	I1217 10:39:59.030583 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:59.030999 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.046619 2968376 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 10:39:59.049547 2968376 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.049578 2968376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 10:39:59.049652 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.071122 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:59.078111 2968376 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 10:39:59.078138 2968376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 10:39:59.078204 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.106268 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:59.210035 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:39:59.247804 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.250104 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
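Addon manifests are applied with the cluster's own kubectl binary against the node-local kubeconfig, exactly as the two Run lines above show (minikube executes this over SSH inside the node). A local stand-in for one apply, whose error return is what feeds the retry loop that follows:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon runs the same command as the log, relying on sudo's support
    // for leading VAR=value environment assignments.
    func applyAddon(manifest string) error {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
    		"apply", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	for _, m := range []string{
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml",
    	} {
    		if err := applyAddon(m); err != nil {
    			fmt.Println(err)
    		}
    	}
    }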
	I1217 10:40:00.029975 2968376 node_ready.go:35] waiting up to 6m0s for node "functional-232588" to be "Ready" ...
	I1217 10:40:00.030121 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.030183 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:00.030443 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030485 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030522 2968376 retry.go:31] will retry after 293.620925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030561 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030575 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030582 2968376 retry.go:31] will retry after 156.365506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030650 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
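The Request/Response pairs are node_ready.go polling GET /api/v1/nodes/functional-232588 roughly every 500ms; an empty status with milliseconds=0 means the TCP connection was refused before any HTTP exchange happened. A sketch of an equivalent wait loop with client-go (assumed behavior, not minikube's code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitReady polls the node until its Ready condition is True, tolerating
    // "connection refused" while the apiserver comes back up.
    func waitReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		} else {
    			fmt.Println("will retry:", err) // e.g. dial tcp ...: connection refused
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %s never became Ready", name)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	_ = waitReady(cs, "functional-232588", 6*time.Minute)
    }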
	I1217 10:40:00.188354 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.324847 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.436532 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.436662 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.436836 2968376 retry.go:31] will retry after 279.814099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.516954 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.518501 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.518555 2968376 retry.go:31] will retry after 262.10287ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
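The varying "will retry after ..." delays (293ms, 156ms, 279ms, 262ms, 799ms, ...) come from a jittered backoff wrapped around each failed kubectl apply, one independent backoff per manifest. A minimal stand-in for that retry helper (not minikube's retry.go):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // withRetry retries f with exponentially growing, jittered waits,
    // logging each failure the way retry.go does above.
    func withRetry(attempts int, base time.Duration, f func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = f(); err == nil {
    			return nil
    		}
    		wait := base << i                                   // exponential growth
    		wait += time.Duration(rand.Int63n(int64(wait))) / 2 // plus jitter
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	_ = withRetry(5, 200*time.Millisecond, func() error {
    		return fmt.Errorf("connect: connection refused") // apiserver not up yet
    	})
    }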
	I1217 10:40:00.531577 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.531724 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:00.533353 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 10:40:00.717812 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.781511 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.801403 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.801643 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.801671 2968376 retry.go:31] will retry after 799.844048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868602 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.868642 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868698 2968376 retry.go:31] will retry after 554.70169ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.031171 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.031268 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.031636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.424206 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:01.486829 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.486884 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.486903 2968376 retry.go:31] will retry after 534.910165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.531036 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.531190 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.531514 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.601938 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:01.666361 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.666415 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.666435 2968376 retry.go:31] will retry after 494.63938ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.022963 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:02.030812 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.030945 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.031372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:02.031439 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:02.093352 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.093469 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.093495 2968376 retry.go:31] will retry after 1.147395482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.161756 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:02.224785 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.224835 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.224873 2968376 retry.go:31] will retry after 722.380129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.530243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.530335 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.530682 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:02.948277 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:03.019220 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.023774 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.023820 2968376 retry.go:31] will retry after 1.527910453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.031105 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.031182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.031525 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:03.241898 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:03.304153 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.304205 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.304227 2968376 retry.go:31] will retry after 2.808262652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.530353 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.530425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.530767 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.030262 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.030340 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.030662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.530190 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.530267 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.530613 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:04.530682 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:04.552783 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:04.614277 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:04.618634 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:04.618671 2968376 retry.go:31] will retry after 1.686088172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:05.031243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.031319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.031611 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:05.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.530314 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.530636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.030216 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.030295 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.030584 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.113005 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:06.174987 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.175028 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.175048 2968376 retry.go:31] will retry after 2.620064864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.305352 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:06.366722 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.366771 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.366790 2968376 retry.go:31] will retry after 6.20410258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.531098 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.531170 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:06.531566 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:07.030285 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.030361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.030703 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:07.530195 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.530269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.530540 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.030245 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.030326 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.530335 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.530413 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.530732 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.796304 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:08.853426 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:08.857034 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:08.857067 2968376 retry.go:31] will retry after 3.174722269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	... the GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 poll above repeats every ~500ms from 10:40:09.030 through 10:40:12.030, each attempt refused (dial tcp 192.168.49.2:8441: connect: connection refused); node_ready.go:55 logs will-retry warnings for node "functional-232588" at 10:40:09.031 and 10:40:11.530 ...
	I1217 10:40:12.032754 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:12.104534 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.104594 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.104617 2968376 retry.go:31] will retry after 7.427014064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.531116 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:12.531194 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:12.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:12.571824 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:12.627783 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.631439 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.631473 2968376 retry.go:31] will retry after 5.673499761s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	... GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 polling continues every ~500ms from 10:40:13.031 through 10:40:18.031, every attempt refused; node_ready.go:55 will-retry warnings at 10:40:14.031 and 10:40:16.530 ...
	I1217 10:40:18.306153 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:18.363523 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:18.367149 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:18.367184 2968376 retry.go:31] will retry after 11.676089788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	... refused GET polls at 10:40:18.530, 10:40:19.031 and 10:40:19.530; node_ready.go:55 will-retry warning at 10:40:18.531 ...
	I1217 10:40:19.532780 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:19.596086 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:19.596134 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:19.596153 2968376 retry.go:31] will retry after 6.09625298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	... GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 polling continues every ~500ms from 10:40:20.031 through 10:40:25.530, every attempt refused; node_ready.go:55 will-retry warnings at 10:40:20.531, 10:40:23.031 and 10:40:25.531 ...
	I1217 10:40:25.692701 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:25.761074 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:25.761116 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:25.761134 2968376 retry.go:31] will retry after 8.308022173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	... GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 polling continues every ~500ms from 10:40:26.030 through 10:40:30.032, every attempt refused; node_ready.go:55 will-retry warnings at 10:40:28.031 and 10:40:30.033 ...
	I1217 10:40:30.044190 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:30.141158 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:30.141207 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:30.141228 2968376 retry.go:31] will retry after 21.251088353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	... GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 polling continues every ~500ms from 10:40:30.530 through 10:40:34.030, every attempt refused; node_ready.go:55 will-retry warning at 10:40:32.531 ...
	I1217 10:40:34.070045 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:34.128651 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:34.132259 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:34.132293 2968376 retry.go:31] will retry after 23.004999937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	... GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 polling continues every ~500ms from 10:40:34.530 through 10:40:51.030, every attempt refused (a single response at 10:40:45 took 1ms instead of 0); node_ready.go:55 will-retry warnings roughly every 2–2.5s, at 10:40:35.031, 10:40:37.031, 10:40:39.531, 10:40:41.531, 10:40:44.030, 10:40:46.031, 10:40:48.531 and 10:40:51.030 ...
	I1217 10:40:51.393321 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:51.454332 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:51.458316 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:51.458350 2968376 retry.go:31] will retry after 15.302727777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	... GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 polling continues every ~500ms from 10:40:51.530 through 10:40:55.031, every attempt refused; node_ready.go:55 will-retry warning at 10:40:53.530 ...
	I1217 10:40:55.531099 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:55.531193 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:55.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:55.531578 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:56.030320 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.030398 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.030700 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.530273 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.030303 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.030719 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.138000 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:57.193212 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:57.197444 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:57.197478 2968376 retry.go:31] will retry after 20.170499035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
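[editor's note] The apply runs inside the node against a kubeconfig that points at localhost:8441, so the refusal on [::1]:8441 here and on 192.168.49.2:8441 in the surrounding poll are the same symptom: nothing is listening on the apiserver port yet, and kubectl's validation fails before the manifest is ever sent because it cannot download /openapi/v2. A sketch of the TCP probe that confirms this; probe is a hypothetical helper, with the addresses taken from the log:

	// Probe a TCP address: "connection refused" on both the node IP and
	// localhost means the apiserver process itself is down, not the network.
	package probe

	import (
		"net"
		"time"
	)

	func probe(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return err // e.g. "connect: connection refused" while apiserver is down
		}
		conn.Close()
		return nil
	}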
	I1217 10:40:57.530886 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.530963 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.531316 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:58.031030 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:58.031101 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:58.031459 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:58.031521 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:58.530185 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:58.530279 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:58.530591 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:59.030603 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:59.030673 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:59.031011 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:59.530181 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:59.530254 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:59.530556 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:00.031130 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:00.031217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:00.031532 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:00.031582 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:00.530232 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:00.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:00.530652 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:01.030118 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:01.030193 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:01.030459 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:01.530122 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:01.530201 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:01.530574 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:02.030319 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:02.030407 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:02.030755 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:02.530195 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:02.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:02.530558 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:02.530607 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:03.030232 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:03.030305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:03.030654 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:03.530352 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:03.530433 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:03.530775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:04.030460 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:04.030552 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:04.030847 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:04.530572 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:04.530659 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:04.530971 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:04.531027 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:05.031015 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:05.031091 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:05.031381 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:05.531137 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:05.531210 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:05.531480 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.030204 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:06.030287 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:06.030672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.530230 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:06.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:06.530661 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:06.762229 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:06.820073 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:06.820109 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:06.820128 2968376 retry.go:31] will retry after 35.040877283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:07.030604 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:07.030693 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:07.030967 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:07.031017 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:07.530709 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:07.530791 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:07.531216 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:08.030859 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:08.030956 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:08.031280 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:08.531028 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:08.531110 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:08.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:09.030137 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:09.030210 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:09.030518 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:09.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:09.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:09.530639 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:09.530700 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:10.030448 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:10.030530 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:10.030820 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:10.530483 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:10.530577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:10.530870 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:11.030249 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:11.030346 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:11.030673 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:11.530364 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:11.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:11.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:11.530760 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:12.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:12.030329 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:12.030660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:12.530205 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:12.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:12.530608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:13.030312 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:13.030400 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:13.030672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:13.530341 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:13.530415 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:13.530818 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:13.530880 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:14.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:14.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:14.030678 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:14.530384 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:14.530453 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:14.530771 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:15.030787 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:15.030877 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:15.031291 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:15.531114 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:15.531196 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:15.531528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:15.531590 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:16.030220 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:16.030296 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:16.030573 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:16.530300 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:16.530383 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:16.530739 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:17.030458 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:17.030539 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:17.030882 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:17.368346 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:17.428304 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:17.431873 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:17.431904 2968376 retry.go:31] will retry after 38.363968078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:17.531154 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:17.531231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:17.531502 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:18.030234 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:18.030352 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:18.030774 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:18.030859 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:18.530515 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:18.530607 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:18.530942 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:19.030903 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:19.030980 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:19.031301 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:19.530780 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:19.530855 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:19.531233 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:20.031004 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:20.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:20.031456 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:20.031515 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:20.530158 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:20.530242 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:20.530554 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:21.030247 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:21.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:21.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:21.530378 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:21.530474 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:21.530782 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:22.030443 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:22.030541 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:22.030864 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:22.530263 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:22.530337 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:22.530672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:22.530725 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:23.030389 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:23.030466 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:23.030819 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:23.530513 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:23.530591 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:23.530877 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:24.030399 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:24.030471 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:24.030823 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:24.530238 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:24.530307 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:24.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:25.030797 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:25.030872 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:25.031158 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:25.031215 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:25.530943 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:25.531023 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:25.531343 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:26.031163 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:26.031243 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:26.031563 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:26.530221 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:26.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:26.530613 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:27.030270 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:27.030340 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:27.030646 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:27.530262 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:27.530344 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:27.530672 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:27.530735 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:28.030374 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:28.030450 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:28.030789 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:28.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:28.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:28.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:29.030406 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:29.030498 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:29.030839 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:29.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:29.530270 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:29.530587 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:30.030598 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:30.030723 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:30.031102 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:30.031168 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:30.530947 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:30.531019 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:30.531339 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:31.031114 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:31.031177 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:31.031431 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:31.531219 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:31.531303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:31.531630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:32.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:32.030291 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:32.030641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:32.530182 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:32.530258 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:32.530539 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:32.530590 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:33.030266 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:33.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:33.030749 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:33.530421 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:33.530501 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:33.530853 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:34.030212 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:34.030287 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:34.030655 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:34.530281 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:34.530361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:34.530710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:34.530814 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:35.030611 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:35.030685 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:35.031034 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:35.530333 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:35.530402 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:35.530717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:36.030300 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:36.030388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:36.030750 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:36.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:36.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:36.530608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:37.030333 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:37.030420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:37.030757 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:37.030808 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:37.530515 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:37.530586 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:37.530947 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:38.030773 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:38.030848 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:38.031196 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:38.530440 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:38.530514 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:38.530796 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:39.030742 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:39.030829 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:39.031155 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:39.031225 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:39.530944 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:39.531015 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:39.531346 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:40.031118 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:40.031205 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:40.031497 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:40.530158 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:40.530248 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:40.530609 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:41.030336 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:41.030425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:41.030757 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:41.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:41.530274 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:41.530533 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:41.530571 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:41.862178 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:41.923706 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923759 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923872 2968376 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
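
	The addon callback shells out to kubectl while the apiserver is still down, so validation cannot download the OpenAPI schema and the apply fails; minikube logs "apply failed, will retry" and tries again later. A minimal stand-alone sketch of that retry pattern, assuming nothing beyond the kubectl CLI (applyWithRetry is an invented helper, not minikube's API):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry runs `kubectl apply --force -f manifest` up to
    // attempts times, sleeping between failures. While the apiserver is
    // unreachable, each attempt fails exactly like the log above.
    func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			lastErr = fmt.Errorf("apply failed (attempt %d): %v\n%s", i+1, err, out)
    			time.Sleep(delay) // back off while the apiserver is down
    			continue
    		}
    		return nil
    	}
    	return lastErr
    }

    func main() {
    	err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 2*time.Second)
    	if err != nil {
    		fmt.Println(err)
    	}
    }
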
	... (polling continues unchanged every ~500ms from 10:41:42.031 through 10:41:55.531; "connection refused" warnings are logged roughly every 2s, and the only variation is a single 4ms response at 10:41:54.534) ...
	W1217 10:41:55.531146 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:55.796501 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:55.858122 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858175 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858259 2968376 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 10:41:55.863014 2968376 out.go:179] * Enabled addons: 
	I1217 10:41:55.865747 2968376 addons.go:530] duration metric: took 1m56.871522842s for enable addons: enabled=[]
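
	After retrying each addon callback for almost two minutes, minikube gives up with enabled=[]. Kubernetes-flavored Go usually bounds loops like this with wait.PollImmediate from k8s.io/apimachinery; a hedged sketch of such a bounded retry follows (checkAPIServer is a hypothetical stand-in, and the 2-minute timeout is only an assumption mirroring the log's duration metric):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // checkAPIServer is a stand-in for a real health check (hypothetical);
    // here it always fails the way the apiserver fails in this run.
    func checkAPIServer() (bool, error) {
    	return false, fmt.Errorf("dial tcp 192.168.49.2:8441: connect: connection refused")
    }

    func main() {
    	// Poll every 500ms, give up after 2 minutes. A transient error is
    	// swallowed (return false, nil) so the poll keeps retrying;
    	// returning a non-nil error would abort the poll immediately.
    	err := wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
    		ready, err := checkAPIServer()
    		if err != nil {
    			fmt.Printf("not ready yet (will retry): %v\n", err)
    			return false, nil
    		}
    		return ready, nil
    	})
    	if err != nil {
    		fmt.Println("gave up waiting:", err) // wait.ErrWaitTimeout after 2m
    	}
    }
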
	... (the same 500ms poll of /api/v1/nodes/functional-232588 continues from 10:41:56.030 through 10:42:37.530, every attempt still refused, with "will retry" warnings logged roughly every 2s) ...
	I1217 10:42:38.030793 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:38.030877 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:38.031209 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:38.530652 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:38.530746 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:38.531014 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:38.531064 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:39.030920 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:39.030999 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:39.031358 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:39.530863 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:39.530944 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:39.531269 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:40.033571 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:40.033646 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:40.033999 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:40.530772 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:40.530895 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:40.531207 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:40.531255 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:41.030902 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:41.030976 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:41.031290 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:41.530836 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:41.530913 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:41.531177 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:42.031028 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:42.031104 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:42.031506 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:42.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:42.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:42.530602 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:43.030257 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:43.030326 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:43.030592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:43.030635 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:43.530202 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:43.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:43.530608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:44.030263 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:44.030345 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:44.030692 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:44.530382 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:44.530453 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:44.530758 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:45.030634 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:45.030711 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:45.031045 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:45.031092 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:45.530980 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:45.531053 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:45.531405 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:46.030984 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:46.031075 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:46.031347 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:46.531101 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:46.531172 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:46.531490 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:47.030218 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:47.030298 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:47.030674 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:47.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:47.530250 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:47.530516 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:47.530556 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:48.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:48.030342 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:48.030699 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:48.530227 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:48.530301 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:48.530655 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:49.030362 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:49.030433 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:49.030712 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:49.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:49.530274 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:49.530611 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:49.530667 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:50.030450 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:50.030536 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:50.030902 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:50.530566 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:50.530643 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:50.530924 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:51.030624 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:51.030697 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:51.031040 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:51.530799 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:51.530874 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:51.531195 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:51.531260 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:52.030967 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:52.031041 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:52.031382 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:52.530130 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:52.530238 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:52.530576 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:53.030280 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:53.030358 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:53.030697 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:53.530178 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:53.530255 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:53.530583 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:54.030277 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:54.030377 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:54.030696 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:54.030750 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:54.530411 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:54.530489 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:54.530806 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:55.030697 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:55.030769 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:55.031047 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:55.530468 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:55.530547 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:55.530914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:56.030257 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:56.030332 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:56.030675 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:56.530365 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:56.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:56.530709 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:56.530760 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:57.030475 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:57.030547 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:57.030868 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:57.530563 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:57.530633 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:57.530984 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:58.030672 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:58.030747 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:58.031048 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:58.530818 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:58.530891 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:58.531173 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:58.531217 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:59.030892 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:59.030968 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:59.031306 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:59.531057 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:59.531132 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:59.531418 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:00.031208 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:00.031315 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:00.031840 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:00.530199 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:00.530272 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:00.530592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:01.030181 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:01.030256 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:01.030519 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:01.030564 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:01.530209 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:01.530280 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:01.530610 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:02.030330 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:02.030414 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:02.030762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:02.530189 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:02.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:02.530545 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:03.030260 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:03.030353 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:03.030693 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:03.030751 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:03.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:03.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:03.530616 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:04.030191 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:04.030264 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:04.030560 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:04.530232 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:04.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:04.530651 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:05.030671 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:05.030756 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:05.031092 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:05.031143 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:05.530868 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:05.530936 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:05.531271 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:06.031105 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:06.031190 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:06.031557 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:06.530208 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:06.530279 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:06.530617 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:07.030293 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:07.030367 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:07.030644 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:07.530306 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:07.530387 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:07.530723 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:07.530782 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:08.030495 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:08.030574 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:08.030934 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:08.530198 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:08.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:08.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:09.030617 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:09.030710 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:09.031007 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:09.530235 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:09.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:09.530669 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:10.030528 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:10.030602 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:10.030907 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:10.030956 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:10.530627 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:10.530703 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:10.531097 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:11.030974 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:11.031061 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:11.031452 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:11.530139 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:11.530212 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:11.530499 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:12.030241 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:12.030318 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:12.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:12.530367 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:12.530446 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:12.530785 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:12.530838 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:13.030512 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:13.030590 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:13.030926 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:13.530231 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:13.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:13.530675 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:14.030433 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:14.030532 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:14.030898 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:14.530191 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:14.530265 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:14.530525 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:15.030561 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:15.030642 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:15.031035 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:15.031108 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:15.530771 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:15.530851 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:15.531186 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:16.030941 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:16.031058 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:16.031367 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:16.531127 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:16.531204 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:16.531551 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:17.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:17.030359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:17.030730 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:17.530432 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:17.530503 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:17.530762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:17.530802 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:18.030287 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:18.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:18.030726 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:18.530417 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:18.530491 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:18.530823 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:19.030615 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:19.030686 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:19.030957 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:19.530751 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:19.530822 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:19.531145 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:19.531219 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:20.030996 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:20.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:20.031466 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:20.530134 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:20.530218 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:20.530480 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:21.030224 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:21.030315 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:21.030716 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:21.530405 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:21.530495 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:21.530849 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:22.030215 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:22.030290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:22.030563 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:22.030612 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:22.530260 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:22.530336 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:22.530660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:23.030248 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:23.030322 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:23.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:23.530351 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:23.530432 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:23.530727 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:24.030221 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:24.030298 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:24.030639 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:24.030700 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:24.530199 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:24.530274 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:24.530592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:25.030528 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:25.030601 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:25.030897 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:25.530568 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:25.530644 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:25.531019 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:26.030843 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:26.030932 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:26.031265 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:26.031321 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:26.531018 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:26.531091 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:26.531351 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:27.031180 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:27.031252 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:27.031581 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:27.530252 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:27.530331 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:27.530667 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:28.030211 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:28.030284 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:28.030564 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:28.530284 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:28.530361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:28.530659 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:28.530707 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:29.030650 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:29.030721 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:29.031052 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:29.530749 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:29.530823 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:29.531149 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:30.031046 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:30.031137 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:30.031519 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:30.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:30.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:30.530684 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:30.530743 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:31.030206 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:31.030288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:31.030560 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:31.530234 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:31.530321 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:31.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:32.030432 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:32.030511 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:32.030861 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:32.530557 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:32.530633 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:32.530986 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:32.531075 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:33.030858 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:33.030935 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:33.031277 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:33.531109 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:33.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:33.531577 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:34.030306 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:34.030382 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:34.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:34.530223 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:34.530319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:34.530708 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:35.030567 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:35.030667 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:35.031054 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:35.031114 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:35.530355 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:35.530425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:35.530748 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:36.030259 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:36.030360 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:36.030744 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:36.530439 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:36.530514 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:36.530839 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:37.030151 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:37.030234 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:37.030595 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:37.530363 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:37.530442 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:37.530778 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:37.530853 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:38.030262 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:38.030348 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:38.030702 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:38.530390 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:38.530461 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:38.530733 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:39.030687 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:39.030760 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:39.031111 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:39.530923 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:39.531001 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:39.531339 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:39.531397 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:40.030955 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:40.031030 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:40.031319 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:40.531063 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:40.531139 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:40.531495 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:41.031163 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:41.031238 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:41.031591 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:41.530249 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:41.530323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:41.530587 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:42.030362 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:42.030453 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:42.030854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:42.030924 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:42.530577 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:42.530657 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:42.531023 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:43.030790 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:43.030866 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:43.031190 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:43.530930 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:43.531021 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:43.531357 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:44.031028 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:44.031107 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:44.031450 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:44.031512 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:44.530164 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:44.530233 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:44.530544 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:45.031170 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:45.031261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:45.031590 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:45.530211 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:45.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:45.530682 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:46.030354 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:46.030422 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:46.030698 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:46.530232 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:46.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:46.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:46.530696 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:47.030366 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:47.030444 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:47.030742 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:47.530404 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:47.530478 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:47.530752 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:48.030496 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:48.030575 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:48.030882 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:48.530214 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:48.530292 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:48.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:49.030376 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.030444 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.030775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:49.030832 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:49.530474 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.530545 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.530877 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.030914 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.030991 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.031360 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.531113 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.531458 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.030169 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.030240 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.030588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.530249 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.530328 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.530647 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:51.530704 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:52.030197 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.030269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.030691 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:52.530433 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.530506 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.530821 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.030577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.030964 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.530257 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:54.030277 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.030388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.030914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:54.030974 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:54.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.530304 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.530629 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.030620 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.030692 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.030975 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.530238 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.530627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.030306 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.530440 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.530509 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.530781 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:56.530820 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:57.030493 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.030591 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.030923 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:57.530749 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.530825 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.531153 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.030917 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.030996 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.031309 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.530552 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.530658 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.531261 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:58.531332 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:59.031089 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.031172 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.031521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:59.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.530308 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.530593 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.030748 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.030831 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.031142 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.530904 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.530989 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.531375 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:00.531435 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:01.030988 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.031055 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.031330 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:01.531137 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.531217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.531543 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.030312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.030660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.530197 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.530553 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:03.030250 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:03.030323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:03.030669 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:03.030723 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:03.530375 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:03.530452 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:03.530800 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:04.030214 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:04.030295 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:04.030627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:04.530317 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:04.530420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:04.530765 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:05.030596 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:05.030677 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:05.031020 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:05.031084 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:05.530367 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:05.530441 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:05.530720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:06.030267 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:06.030359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:06.030720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:06.530416 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:06.530489 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:06.530819 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:07.030188 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:07.030264 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:07.030539 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:07.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:07.530283 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:07.530594 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:07.530643 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:08.030372 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:08.030476 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:08.030889 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:08.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:08.530253 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:08.530521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:09.031107 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:09.031182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:09.031487 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:09.530173 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:09.530246 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:09.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:10.030187 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:10.030261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:10.030583 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:10.030632 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:10.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:10.530285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:10.530616 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:11.030321 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:11.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:11.030717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:11.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:11.530260 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:11.530567 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:12.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:12.030345 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:12.030692 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:12.030750 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:12.530410 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:12.530491 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:12.530831 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:13.030508 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:13.030583 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:13.030845 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:13.530215 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:13.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:13.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:14.030276 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:14.030355 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:14.030683 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:14.530187 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:14.530269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:14.530570 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:14.530619 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:15.030634 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:15.030728 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:15.031132 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:15.530902 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:15.530978 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:15.531320 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:16.031133 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:16.031225 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:16.031608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:16.530217 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:16.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:16.530631 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:16.530691 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:17.030347 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:17.030434 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:17.030771 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:17.530190 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:17.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:17.530547 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:18.030304 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:18.030396 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:18.030847 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:18.530228 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:18.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:18.530658 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:19.030405 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:19.030487 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:19.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:19.030793 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:19.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:19.530287 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:19.530630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:20.030471 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:20.030552 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:20.030904 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:20.530566 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:20.530649 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:20.530928 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:21.030264 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:21.030341 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:21.030695 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:21.530387 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:21.530465 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:21.530798 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:21.530854 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:22.030489 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:22.030560 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:22.030836 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:22.530229 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:22.530311 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:22.530651 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:23.030359 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:23.030435 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:23.030790 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:23.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:23.530257 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:23.530538 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:24.030285 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:24.030362 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:24.030720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:24.030784 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[identical polling omitted: the GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 request above repeats every ~500ms from 10:44:24.530 through 10:45:24.031, each attempt returning an empty response (status="" headers="" milliseconds=0). node_ready.go:55 logs the same "will retry" warning, Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused, at 10:44:26, 10:44:28, 10:44:30, 10:44:32, 10:44:34, 10:44:37, 10:44:39, 10:44:41, 10:44:43, 10:44:45, 10:44:47, 10:44:50, 10:44:52, 10:44:54, 10:44:56, 10:44:59, 10:45:01, 10:45:03, 10:45:06, 10:45:08, 10:45:10, 10:45:12, 10:45:14, 10:45:16, 10:45:19, 10:45:21, and 10:45:24.]
	I1217 10:45:24.530825 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:24.530892 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:24.531165 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:25.031145 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:25.031231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:25.031590 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:25.530287 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:25.530385 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:25.530787 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:26.030475 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:26.030551 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:26.030935 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:26.530656 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:26.530743 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:26.531109 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:26.531165 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:27.030982 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:27.031070 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:27.031412 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:27.530790 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:27.530858 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:27.531125 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:28.030995 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:28.031074 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:28.031452 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:28.530173 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:28.530254 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:28.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:29.030339 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:29.030432 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:29.030724 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:29.030767 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:29.530501 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:29.530583 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:29.530943 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:30.030872 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:30.030956 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:30.031277 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:30.531026 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:30.531096 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:30.531388 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:31.031173 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:31.031248 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:31.031592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:31.031655 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:31.530219 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:31.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:31.530619 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:32.030304 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:32.030380 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:32.030665 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:32.530207 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:32.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:32.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:33.030347 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:33.030429 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:33.030767 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:33.530186 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:33.530259 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:33.530528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:33.530569 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:34.030234 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:34.030320 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:34.030648 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:34.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:34.530303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:34.530668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:35.030514 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:35.030598 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:35.030879 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:35.530539 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:35.530621 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:35.530944 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:35.530999 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:36.030792 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:36.030868 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:36.031197 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:36.530952 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:36.531027 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:36.531293 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:37.031128 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:37.031222 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:37.031596 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:37.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:37.530284 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:37.530618 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:38.030192 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:38.030278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:38.030552 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:38.030630 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:38.530218 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:38.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:38.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:39.030659 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:39.030738 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:39.031056 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:39.530839 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:39.530914 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:39.531181 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:40.031117 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.031198 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.031558 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:40.031631 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:40.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.530292 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.530625 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.030178 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.030256 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.030535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.030366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.030455 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.030891 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.530441 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.530519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.530792 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:42.530833 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:43.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.030585 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.030904 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:43.530228 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.530630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.030307 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.030381 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.030707 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.530227 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.530296 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:45.031339 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.031427 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.031745 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:45.031809 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:45.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.530545 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.030321 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.030687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.530366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.530762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.030423 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.030519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.030809 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.530502 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.530580 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.530914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:47.530970 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:48.030658 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.030731 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.031047 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:48.530357 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.530426 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.530764 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.030805 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.030882 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.031204 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.530982 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.531053 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:49.531427 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:50.031030 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.031115 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.031532 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:50.530205 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.530299 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.530623 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.030286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.030614 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.530323 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.530401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.530711 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:52.030254 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.030329 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.030627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:52.030687 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:52.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.530659 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.030195 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.030278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.030640 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.530268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.530359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.530765 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:54.030501 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.030589 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.030906 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:54.030956 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:54.530370 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.530458 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.530775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.030784 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.030866 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.031248 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.531028 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.531111 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.531412 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:56.031149 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.031232 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.031533 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:56.031587 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.530282 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.030260 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.030673 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.530209 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.530285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.530565 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.030688 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.530418 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.530493 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.530876 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:58.530935 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:59.030221 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.030349 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.030710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:59.530411 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.530486 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.530845 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:46:00.030785 2968376 type.go:168] "Request Body" body=""
	I1217 10:46:00.030868 2968376 node_ready.go:38] duration metric: took 6m0.00085226s for node "functional-232588" to be "Ready" ...
	I1217 10:46:00.039967 2968376 out.go:203] 
	W1217 10:46:00.043066 2968376 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 10:46:00.043095 2968376 out.go:285] * 
	W1217 10:46:00.047185 2968376 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:46:00.056487 2968376 out.go:203] 
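
A note on the failure mode recorded above: the round_trippers entries are minikube's node-readiness wait, which polls the node object every 500ms and aborts once its 6-minute context deadline expires, producing "WaitNodeCondition: context deadline exceeded". A minimal sketch of that pattern, assuming a hypothetical check callback (illustrative only, not minikube's actual implementation):

	// Deadline-bounded readiness polling, sketched after the log above.
	// waitNodeReady and check are hypothetical names.
	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	func waitNodeReady(ctx context.Context, check func() (bool, error)) error {
		tick := time.NewTicker(500 * time.Millisecond) // matches the poll interval in the log
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				// Surfaces as "WaitNodeCondition: context deadline exceeded".
				return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
			case <-tick.C:
				ready, err := check()
				if err != nil {
					continue // connection refused is retryable, so keep polling
				}
				if ready {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err := waitNodeReady(ctx, func() (bool, error) {
			// A real check would GET /api/v1/nodes/<name> and inspect the
			// Ready condition; here the apiserver never comes up.
			return false, errors.New("connect: connection refused")
		})
		fmt.Println(err)
	}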
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:46:07 functional-232588 containerd[5229]: time="2025-12-17T10:46:07.708521582Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:08 functional-232588 containerd[5229]: time="2025-12-17T10:46:08.757378641Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 17 10:46:08 functional-232588 containerd[5229]: time="2025-12-17T10:46:08.760264252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 17 10:46:08 functional-232588 containerd[5229]: time="2025-12-17T10:46:08.768823414Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:08 functional-232588 containerd[5229]: time="2025-12-17T10:46:08.769154590Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:09 functional-232588 containerd[5229]: time="2025-12-17T10:46:09.719423863Z" level=info msg="No images store for sha256:e51c5bc238d591cfa792477ad36236d6d751433afed0e22641b208b8c42c89b3"
	Dec 17 10:46:09 functional-232588 containerd[5229]: time="2025-12-17T10:46:09.721523450Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-232588\""
	Dec 17 10:46:09 functional-232588 containerd[5229]: time="2025-12-17T10:46:09.728263412Z" level=info msg="ImageCreate event name:\"sha256:139b28e7c45f6120a651876f7db60c8dc8c2da89658d2cb729b8871bf45e8e9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:09 functional-232588 containerd[5229]: time="2025-12-17T10:46:09.728732784Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-232588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:10 functional-232588 containerd[5229]: time="2025-12-17T10:46:10.564291348Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 17 10:46:10 functional-232588 containerd[5229]: time="2025-12-17T10:46:10.566764136Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 17 10:46:10 functional-232588 containerd[5229]: time="2025-12-17T10:46:10.568714737Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 17 10:46:10 functional-232588 containerd[5229]: time="2025-12-17T10:46:10.581031489Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.638544848Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.640682209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.647896951Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.648373953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.669869134Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.672204243Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.674145441Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.681979705Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.815453253Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.817833924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.824862618Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.825333450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:46:13.599447    9231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:13.600324    9231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:13.602114    9231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:13.602721    9231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:13.604319    9231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
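
Both kubectl's errors here and the node_ready warnings earlier point at the same condition: nothing is listening on port 8441 inside the guest. A quick sketch for confirming reachability of that endpoint from Go, using only the standard library (the address is taken from the logs; the probe itself is illustrative):

	// TCP reachability probe for the apiserver endpoint in the logs.
	// A refused dial reproduces the "connection refused" seen above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // expected in this state
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}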
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:46:13 up 16:28,  0 user,  load average: 0.92, 0.39, 0.80
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 10:46:10 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:10 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 17 10:46:10 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:11 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:11 functional-232588 kubelet[9007]: E1217 10:46:11.054731    9007 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:11 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:11 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:11 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 17 10:46:11 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:11 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:11 functional-232588 kubelet[9102]: E1217 10:46:11.811888    9102 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:11 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:11 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:12 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 17 10:46:12 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:12 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:12 functional-232588 kubelet[9127]: E1217 10:46:12.573117    9127 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:12 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:12 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:13 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 17 10:46:13 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:13 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:13 functional-232588 kubelet[9154]: E1217 10:46:13.319870    9154 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:13 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:13 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
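The kubelet section above contains the actual root cause: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), and systemd keeps restarting it (counters 823 through 826), so the apiserver never comes up and every wait above times out. A hedged sketch for checking which cgroup hierarchy a host mounts, via the filesystem magic of /sys/fs/cgroup (assumes golang.org/x/sys/unix):

	// Detect cgroup v2 (unified hierarchy) vs. v1 by the statfs magic of
	// /sys/fs/cgroup; v1 hosts like this job's runner expose a tmpfs there.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 (legacy hierarchy)") // the case this run hit
		}
	}

The shell equivalent is `stat -fc %T /sys/fs/cgroup`, which prints cgroup2fs on a v2 host.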
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (339.920895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-232588 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-232588 get pods: exit status 1 (105.888236ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-232588 get pods": exit status 1
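For reference, this test drives kubectl exactly as a user would, as a subprocess whose non-zero exit fails the test. A sketch of that invocation pattern (a standalone illustration, not the harness's actual helper):

	// Run the bundled kubectl against the profile's context and surface
	// the exit status, mirroring "out/kubectl --context functional-232588 get pods".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/kubectl", "--context", "functional-232588", "get", "pods")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// With the apiserver down this is exit status 1, as recorded above.
			fmt.Println("kubectl failed:", err)
		}
	}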
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
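The inspect output shows the container itself is Running and that 8441/tcp inside the container is published on 127.0.0.1:35736 on the host, so the failure lies with the apiserver process, not the port mapping. A hedged sketch (assuming a local docker CLI; the Go-template expression mirrors the one minikube's cli_runner uses for 22/tcp in the logs below, pointed at 8441/tcp instead) that extracts the forwarded port programmatically:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template shape as minikube's own SSH-port lookup, for the apiserver port.
		format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-232588").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Per the inspect output above, this prints 127.0.0.1:35736.
		fmt.Println("apiserver forwarded to 127.0.0.1:" + strings.TrimSpace(string(out)))
	}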
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (306.502528ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
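Both status checks here pass Go text/template expressions ({{.Host}} and, earlier, {{.APIServer}}) that are evaluated against minikube's status structure, which is why the same profile can simultaneously report Host=Running and APIServer=Stopped. A minimal sketch of that mechanism; the Status struct below is a hypothetical stand-in that models only the two fields exercised by these checks:

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for minikube's status struct; only the two
	// fields queried by the checks above are modeled.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		s := Status{Host: "Running", APIServer: "Stopped"} // the values reported above
		for _, f := range []string{"{{.Host}}", "{{.APIServer}}"} {
			t := template.Must(template.New("status").Parse(f))
			t.Execute(os.Stdout, s)
			os.Stdout.WriteString("\n")
		}
	}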
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-232588 logs -n 25: (1.002437975s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-626013 image ls --format short --alsologtostderr                                                                                           │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls --format yaml --alsologtostderr                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh     │ functional-626013 ssh pgrep buildkitd                                                                                                                 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ image   │ functional-626013 image ls --format json --alsologtostderr                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls --format table --alsologtostderr                                                                                           │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr                                                │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls                                                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ delete  │ -p functional-626013                                                                                                                                  │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ start   │ -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ start   │ -p functional-232588 --alsologtostderr -v=8                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:39 UTC │                     │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:latest                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add minikube-local-cache-test:functional-232588                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache delete minikube-local-cache-test:functional-232588                                                                            │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl images                                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	│ cache   │ functional-232588 cache reload                                                                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ kubectl │ functional-232588 kubectl -- --context functional-232588 get pods                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:39:54
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:39:54.887492 2968376 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:39:54.887669 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887679 2968376 out.go:374] Setting ErrFile to fd 2...
	I1217 10:39:54.887684 2968376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:39:54.887953 2968376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:39:54.888377 2968376 out.go:368] Setting JSON to false
	I1217 10:39:54.889321 2968376 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":58945,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:39:54.889394 2968376 start.go:143] virtualization:  
	I1217 10:39:54.892820 2968376 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:39:54.896642 2968376 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:39:54.896710 2968376 notify.go:221] Checking for updates...
	I1217 10:39:54.900325 2968376 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:39:54.903432 2968376 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:54.906306 2968376 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:39:54.909105 2968376 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:39:54.911889 2968376 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:39:54.915217 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:54.915331 2968376 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:39:54.937972 2968376 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:39:54.938091 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.000760 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:54.991784263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.000879 2968376 docker.go:319] overlay module found
	I1217 10:39:55.005745 2968376 out.go:179] * Using the docker driver based on existing profile
	I1217 10:39:55.010762 2968376 start.go:309] selected driver: docker
	I1217 10:39:55.010794 2968376 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableC
oreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.010914 2968376 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:39:55.011044 2968376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:39:55.065164 2968376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:39:55.056463493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:39:55.065569 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:55.065633 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:55.065694 2968376 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:55.070664 2968376 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:39:55.073373 2968376 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:39:55.076286 2968376 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:39:55.079282 2968376 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:39:55.079315 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:55.079350 2968376 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:39:55.079358 2968376 cache.go:65] Caching tarball of preloaded images
	I1217 10:39:55.079437 2968376 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:39:55.079447 2968376 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 10:39:55.079550 2968376 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:39:55.100219 2968376 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:39:55.100251 2968376 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 10:39:55.100265 2968376 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:39:55.100297 2968376 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:39:55.100355 2968376 start.go:364] duration metric: took 36.061µs to acquireMachinesLock for "functional-232588"
	I1217 10:39:55.100378 2968376 start.go:96] Skipping create...Using existing machine configuration
	I1217 10:39:55.100389 2968376 fix.go:54] fixHost starting: 
	I1217 10:39:55.100690 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:55.118322 2968376 fix.go:112] recreateIfNeeded on functional-232588: state=Running err=<nil>
	W1217 10:39:55.118352 2968376 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 10:39:55.121614 2968376 out.go:252] * Updating the running docker "functional-232588" container ...
	I1217 10:39:55.121666 2968376 machine.go:94] provisionDockerMachine start ...
	I1217 10:39:55.121762 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.140448 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.140568 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.140576 2968376 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:39:55.272992 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.273058 2968376 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:39:55.273155 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.294100 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.294200 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.294209 2968376 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:39:55.433566 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:39:55.433651 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.452012 2968376 main.go:143] libmachine: Using SSH client type: native
	I1217 10:39:55.452130 2968376 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:39:55.452152 2968376 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:39:55.584734 2968376 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 10:39:55.584801 2968376 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:39:55.584835 2968376 ubuntu.go:190] setting up certificates
	I1217 10:39:55.584846 2968376 provision.go:84] configureAuth start
	I1217 10:39:55.584917 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:55.602169 2968376 provision.go:143] copyHostCerts
	I1217 10:39:55.602226 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602261 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:39:55.602273 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:39:55.602347 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:39:55.602482 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602507 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:39:55.602512 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:39:55.602540 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:39:55.602588 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602609 2968376 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:39:55.602618 2968376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:39:55.602651 2968376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:39:55.602701 2968376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
	I1217 10:39:55.859794 2968376 provision.go:177] copyRemoteCerts
	I1217 10:39:55.859877 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:39:55.859950 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:55.877144 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:55.974879 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 10:39:55.974962 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:39:55.992960 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 10:39:55.993024 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 10:39:56.017007 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 10:39:56.017075 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:39:56.039037 2968376 provision.go:87] duration metric: took 454.177473ms to configureAuth
	I1217 10:39:56.039062 2968376 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:39:56.039248 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:56.039255 2968376 machine.go:97] duration metric: took 917.583269ms to provisionDockerMachine
	I1217 10:39:56.039263 2968376 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:39:56.039274 2968376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:39:56.039330 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:39:56.039374 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.064674 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.164379 2968376 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:39:56.167903 2968376 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 10:39:56.167924 2968376 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 10:39:56.167929 2968376 command_runner.go:130] > VERSION_ID="12"
	I1217 10:39:56.167934 2968376 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 10:39:56.167939 2968376 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 10:39:56.167943 2968376 command_runner.go:130] > ID=debian
	I1217 10:39:56.167947 2968376 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 10:39:56.167952 2968376 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 10:39:56.167958 2968376 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 10:39:56.168026 2968376 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:39:56.168043 2968376 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:39:56.168054 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:39:56.168116 2968376 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:39:56.168193 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:39:56.168199 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /etc/ssl/certs/29245742.pem
	I1217 10:39:56.168276 2968376 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:39:56.168280 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> /etc/test/nested/copy/2924574/hosts
	I1217 10:39:56.168325 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:39:56.175992 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:56.194065 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:39:56.211618 2968376 start.go:296] duration metric: took 172.340234ms for postStartSetup
	I1217 10:39:56.211696 2968376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:39:56.211740 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.229142 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.321408 2968376 command_runner.go:130] > 18%
	I1217 10:39:56.321497 2968376 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:39:56.325775 2968376 command_runner.go:130] > 160G
	I1217 10:39:56.326243 2968376 fix.go:56] duration metric: took 1.225850623s for fixHost
	I1217 10:39:56.326261 2968376 start.go:83] releasing machines lock for "functional-232588", held for 1.22589425s
	I1217 10:39:56.326382 2968376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:39:56.351440 2968376 ssh_runner.go:195] Run: cat /version.json
	I1217 10:39:56.351467 2968376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:39:56.351509 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.351532 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:56.377953 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.378286 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:56.472298 2968376 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 10:39:56.558575 2968376 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 10:39:56.561329 2968376 ssh_runner.go:195] Run: systemctl --version
	I1217 10:39:56.567378 2968376 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 10:39:56.567418 2968376 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 10:39:56.567866 2968376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 10:39:56.572178 2968376 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 10:39:56.572242 2968376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:39:56.572327 2968376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:39:56.580077 2968376 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 10:39:56.580102 2968376 start.go:496] detecting cgroup driver to use...
	I1217 10:39:56.580153 2968376 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 10:39:56.580207 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:39:56.595473 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:39:56.608619 2968376 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:39:56.608683 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:39:56.624626 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:39:56.639198 2968376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:39:56.750544 2968376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:39:56.881240 2968376 docker.go:234] disabling docker service ...
	I1217 10:39:56.881321 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:39:56.896533 2968376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:39:56.909686 2968376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:39:57.029179 2968376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:39:57.147650 2968376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 10:39:57.160165 2968376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:39:57.172821 2968376 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 10:39:57.174291 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:39:57.183184 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:39:57.192049 2968376 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:39:57.192173 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:39:57.201301 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.210430 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:39:57.219288 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:39:57.228051 2968376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:39:57.235994 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:39:57.245724 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:39:57.254416 2968376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 10:39:57.263062 2968376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:39:57.269668 2968376 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 10:39:57.270584 2968376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:39:57.278345 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.386138 2968376 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 10:39:57.532674 2968376 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:39:57.532750 2968376 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:39:57.536608 2968376 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1217 10:39:57.536637 2968376 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 10:39:57.536644 2968376 command_runner.go:130] > Device: 0,72	Inode: 1613        Links: 1
	I1217 10:39:57.536652 2968376 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:57.536659 2968376 command_runner.go:130] > Access: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536664 2968376 command_runner.go:130] > Modify: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536669 2968376 command_runner.go:130] > Change: 2025-12-17 10:39:57.469772687 +0000
	I1217 10:39:57.536673 2968376 command_runner.go:130] >  Birth: -
	I1217 10:39:57.537168 2968376 start.go:564] Will wait 60s for crictl version
	I1217 10:39:57.537224 2968376 ssh_runner.go:195] Run: which crictl
	I1217 10:39:57.540827 2968376 command_runner.go:130] > /usr/local/bin/crictl
	I1217 10:39:57.541302 2968376 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:39:57.573267 2968376 command_runner.go:130] > Version:  0.1.0
	I1217 10:39:57.573463 2968376 command_runner.go:130] > RuntimeName:  containerd
	I1217 10:39:57.573480 2968376 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1217 10:39:57.573656 2968376 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 10:39:57.575908 2968376 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:39:57.575979 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.593702 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.595828 2968376 ssh_runner.go:195] Run: containerd --version
	I1217 10:39:57.613025 2968376 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1217 10:39:57.620756 2968376 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 10:39:57.623690 2968376 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:39:57.639560 2968376 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:39:57.643332 2968376 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 10:39:57.643691 2968376 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:39:57.643808 2968376 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:39:57.643873 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.668138 2968376 command_runner.go:130] > {
	I1217 10:39:57.668155 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.668160 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668169 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.668174 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668179 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.668183 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668187 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668196 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.668199 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668204 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.668208 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668212 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668215 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668218 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668226 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.668231 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668236 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.668239 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668244 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668252 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.668260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668264 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.668267 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668271 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668274 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668284 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.668288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668293 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.668296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668303 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668311 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.668314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668319 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.668323 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.668327 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668330 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668333 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668340 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.668344 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668348 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.668351 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668355 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668363 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.668366 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668370 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.668375 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668379 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668382 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668386 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668390 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668393 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668396 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668405 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.668409 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668433 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.668438 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668442 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668450 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.668454 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668458 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.668461 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668470 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668478 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668482 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668485 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668489 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668492 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668498 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.668503 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.668512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668517 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668530 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.668537 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668542 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.668545 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668549 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668557 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668562 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668576 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668580 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668583 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668589 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.668593 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668598 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.668608 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668614 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668622 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.668629 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668633 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.668637 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668641 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668645 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668648 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668655 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.668662 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668668 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.668672 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668678 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668689 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.668692 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668696 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.668702 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668706 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.668719 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668723 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668726 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.668730 2968376 command_runner.go:130] >     },
	I1217 10:39:57.668734 2968376 command_runner.go:130] >     {
	I1217 10:39:57.668740 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.668748 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.668753 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.668756 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668760 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.668767 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.668773 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.668777 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.668781 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.668792 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.668799 2968376 command_runner.go:130] >       },
	I1217 10:39:57.668803 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.668807 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.668810 2968376 command_runner.go:130] >     }
	I1217 10:39:57.668813 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.668816 2968376 command_runner.go:130] > }
	I1217 10:39:57.671107 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.671128 2968376 containerd.go:534] Images already preloaded, skipping extraction
	I1217 10:39:57.671185 2968376 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:39:57.697059 2968376 command_runner.go:130] > {
	I1217 10:39:57.697078 2968376 command_runner.go:130] >   "images":  [
	I1217 10:39:57.697083 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697093 2968376 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 10:39:57.697108 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697114 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 10:39:57.697118 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697122 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697131 2968376 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1217 10:39:57.697142 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697147 2968376 command_runner.go:130] >       "size":  "40636774",
	I1217 10:39:57.697155 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697159 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697162 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697166 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697175 2968376 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 10:39:57.697180 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697185 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 10:39:57.697188 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697192 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697202 2968376 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 10:39:57.697205 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697209 2968376 command_runner.go:130] >       "size":  "8034419",
	I1217 10:39:57.697213 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697216 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697219 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697222 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697229 2968376 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 10:39:57.697233 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697238 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 10:39:57.697242 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697249 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697256 2968376 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1217 10:39:57.697260 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697264 2968376 command_runner.go:130] >       "size":  "21168808",
	I1217 10:39:57.697268 2968376 command_runner.go:130] >       "username":  "nonroot",
	I1217 10:39:57.697272 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697275 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697278 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697284 2968376 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1217 10:39:57.697288 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697293 2968376 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1217 10:39:57.697296 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697300 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697310 2968376 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1217 10:39:57.697314 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697318 2968376 command_runner.go:130] >       "size":  "21749640",
	I1217 10:39:57.697323 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697327 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697330 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697334 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697338 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697341 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697344 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697350 2968376 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1217 10:39:57.697354 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697359 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1217 10:39:57.697363 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697366 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697374 2968376 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1217 10:39:57.697377 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697381 2968376 command_runner.go:130] >       "size":  "24692223",
	I1217 10:39:57.697384 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697393 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697396 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697400 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697403 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697406 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697409 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697416 2968376 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1217 10:39:57.697419 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697425 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1217 10:39:57.697428 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697432 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697440 2968376 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1217 10:39:57.697443 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697448 2968376 command_runner.go:130] >       "size":  "20672157",
	I1217 10:39:57.697460 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697464 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697467 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697470 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697474 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697477 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697480 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697486 2968376 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1217 10:39:57.697490 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697495 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1217 10:39:57.697498 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697501 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697509 2968376 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1217 10:39:57.697512 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697515 2968376 command_runner.go:130] >       "size":  "22432301",
	I1217 10:39:57.697519 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697523 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697526 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697530 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697536 2968376 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1217 10:39:57.697540 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697545 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1217 10:39:57.697548 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697552 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697560 2968376 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1217 10:39:57.697563 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697567 2968376 command_runner.go:130] >       "size":  "15405535",
	I1217 10:39:57.697570 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697574 2968376 command_runner.go:130] >         "value":  "0"
	I1217 10:39:57.697578 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697581 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697585 2968376 command_runner.go:130] >       "pinned":  false
	I1217 10:39:57.697588 2968376 command_runner.go:130] >     },
	I1217 10:39:57.697594 2968376 command_runner.go:130] >     {
	I1217 10:39:57.697600 2968376 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 10:39:57.697604 2968376 command_runner.go:130] >       "repoTags":  [
	I1217 10:39:57.697609 2968376 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 10:39:57.697612 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697615 2968376 command_runner.go:130] >       "repoDigests":  [
	I1217 10:39:57.697622 2968376 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1217 10:39:57.697626 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.697630 2968376 command_runner.go:130] >       "size":  "267939",
	I1217 10:39:57.697633 2968376 command_runner.go:130] >       "uid":  {
	I1217 10:39:57.697637 2968376 command_runner.go:130] >         "value":  "65535"
	I1217 10:39:57.697641 2968376 command_runner.go:130] >       },
	I1217 10:39:57.697645 2968376 command_runner.go:130] >       "username":  "",
	I1217 10:39:57.697649 2968376 command_runner.go:130] >       "pinned":  true
	I1217 10:39:57.697652 2968376 command_runner.go:130] >     }
	I1217 10:39:57.697655 2968376 command_runner.go:130] >   ]
	I1217 10:39:57.697657 2968376 command_runner.go:130] > }
	I1217 10:39:57.699989 2968376 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:39:57.700059 2968376 cache_images.go:86] Images are preloaded, skipping loading
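
The two identical "sudo crictl images --output json" listings above are the basis for the preload decision: minikube parses the runtime's image store and only extracts the preload tarball when a required image is missing. A minimal Go sketch of that comparison, assuming a hypothetical crictlImages struct matching the JSON shape shown above and a hand-picked required list rather than minikube's real lookup tables:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// crictlImages matches the JSON printed by "crictl images --output json" above.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImages
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Illustrative subset of the images the log shows for v1.35.0-rc.1.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
			"registry.k8s.io/etcd:3.6.6-0",
			"registry.k8s.io/pause:3.10.1",
		}
		var missing []string
		for _, r := range required {
			if !have[r] {
				missing = append(missing, r)
			}
		}
		if len(missing) == 0 {
			fmt.Println("all images are preloaded for containerd runtime.")
		} else {
			fmt.Println("missing:", strings.Join(missing, ", "))
		}
	}
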
	I1217 10:39:57.700081 2968376 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:39:57.700225 2968376 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
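
The kubelet unit printed above is a systemd drop-in generated from the node's config; the parts that vary per profile are the runtime, the Kubernetes version in the binary path, the hostname override, and the node IP. A sketch of rendering such a drop-in with text/template, using the values from this log and hypothetical field names (minikube's real template carries more options than shown here):

	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		// Hypothetical field names; values taken from the log above.
		opts := struct {
			Runtime, KubeVersion, NodeName, NodeIP string
		}{"containerd", "v1.35.0-rc.1", "functional-232588", "192.168.49.2"}
		tmpl := template.Must(template.New("kubelet").Parse(dropIn))
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}
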
	I1217 10:39:57.700311 2968376 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:39:57.722782 2968376 command_runner.go:130] > {
	I1217 10:39:57.722800 2968376 command_runner.go:130] >   "cniconfig": {
	I1217 10:39:57.722805 2968376 command_runner.go:130] >     "Networks": [
	I1217 10:39:57.722813 2968376 command_runner.go:130] >       {
	I1217 10:39:57.722822 2968376 command_runner.go:130] >         "Config": {
	I1217 10:39:57.722827 2968376 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1217 10:39:57.722835 2968376 command_runner.go:130] >           "Name": "cni-loopback",
	I1217 10:39:57.722839 2968376 command_runner.go:130] >           "Plugins": [
	I1217 10:39:57.722843 2968376 command_runner.go:130] >             {
	I1217 10:39:57.722847 2968376 command_runner.go:130] >               "Network": {
	I1217 10:39:57.722851 2968376 command_runner.go:130] >                 "ipam": {},
	I1217 10:39:57.722856 2968376 command_runner.go:130] >                 "type": "loopback"
	I1217 10:39:57.722860 2968376 command_runner.go:130] >               },
	I1217 10:39:57.722866 2968376 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1217 10:39:57.722869 2968376 command_runner.go:130] >             }
	I1217 10:39:57.722873 2968376 command_runner.go:130] >           ],
	I1217 10:39:57.722882 2968376 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1217 10:39:57.722886 2968376 command_runner.go:130] >         },
	I1217 10:39:57.722893 2968376 command_runner.go:130] >         "IFName": "lo"
	I1217 10:39:57.722896 2968376 command_runner.go:130] >       }
	I1217 10:39:57.722899 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722908 2968376 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1217 10:39:57.722912 2968376 command_runner.go:130] >     "PluginDirs": [
	I1217 10:39:57.722915 2968376 command_runner.go:130] >       "/opt/cni/bin"
	I1217 10:39:57.722919 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722923 2968376 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1217 10:39:57.722926 2968376 command_runner.go:130] >     "Prefix": "eth"
	I1217 10:39:57.722930 2968376 command_runner.go:130] >   },
	I1217 10:39:57.722933 2968376 command_runner.go:130] >   "config": {
	I1217 10:39:57.722936 2968376 command_runner.go:130] >     "cdiSpecDirs": [
	I1217 10:39:57.722940 2968376 command_runner.go:130] >       "/etc/cdi",
	I1217 10:39:57.722944 2968376 command_runner.go:130] >       "/var/run/cdi"
	I1217 10:39:57.722948 2968376 command_runner.go:130] >     ],
	I1217 10:39:57.722952 2968376 command_runner.go:130] >     "cni": {
	I1217 10:39:57.722955 2968376 command_runner.go:130] >       "binDir": "",
	I1217 10:39:57.722959 2968376 command_runner.go:130] >       "binDirs": [
	I1217 10:39:57.722962 2968376 command_runner.go:130] >         "/opt/cni/bin"
	I1217 10:39:57.722965 2968376 command_runner.go:130] >       ],
	I1217 10:39:57.722969 2968376 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1217 10:39:57.722973 2968376 command_runner.go:130] >       "confTemplate": "",
	I1217 10:39:57.722983 2968376 command_runner.go:130] >       "ipPref": "",
	I1217 10:39:57.722986 2968376 command_runner.go:130] >       "maxConfNum": 1,
	I1217 10:39:57.722991 2968376 command_runner.go:130] >       "setupSerially": false,
	I1217 10:39:57.722995 2968376 command_runner.go:130] >       "useInternalLoopback": false
	I1217 10:39:57.722998 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723004 2968376 command_runner.go:130] >     "containerd": {
	I1217 10:39:57.723008 2968376 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1217 10:39:57.723013 2968376 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1217 10:39:57.723017 2968376 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1217 10:39:57.723021 2968376 command_runner.go:130] >       "runtimes": {
	I1217 10:39:57.723024 2968376 command_runner.go:130] >         "runc": {
	I1217 10:39:57.723029 2968376 command_runner.go:130] >           "ContainerAnnotations": null,
	I1217 10:39:57.723033 2968376 command_runner.go:130] >           "PodAnnotations": null,
	I1217 10:39:57.723038 2968376 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1217 10:39:57.723046 2968376 command_runner.go:130] >           "cgroupWritable": false,
	I1217 10:39:57.723050 2968376 command_runner.go:130] >           "cniConfDir": "",
	I1217 10:39:57.723054 2968376 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1217 10:39:57.723058 2968376 command_runner.go:130] >           "io_type": "",
	I1217 10:39:57.723061 2968376 command_runner.go:130] >           "options": {
	I1217 10:39:57.723065 2968376 command_runner.go:130] >             "BinaryName": "",
	I1217 10:39:57.723069 2968376 command_runner.go:130] >             "CriuImagePath": "",
	I1217 10:39:57.723074 2968376 command_runner.go:130] >             "CriuWorkPath": "",
	I1217 10:39:57.723077 2968376 command_runner.go:130] >             "IoGid": 0,
	I1217 10:39:57.723081 2968376 command_runner.go:130] >             "IoUid": 0,
	I1217 10:39:57.723085 2968376 command_runner.go:130] >             "NoNewKeyring": false,
	I1217 10:39:57.723089 2968376 command_runner.go:130] >             "Root": "",
	I1217 10:39:57.723092 2968376 command_runner.go:130] >             "ShimCgroup": "",
	I1217 10:39:57.723096 2968376 command_runner.go:130] >             "SystemdCgroup": false
	I1217 10:39:57.723100 2968376 command_runner.go:130] >           },
	I1217 10:39:57.723105 2968376 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1217 10:39:57.723111 2968376 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1217 10:39:57.723115 2968376 command_runner.go:130] >           "runtimePath": "",
	I1217 10:39:57.723120 2968376 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1217 10:39:57.723124 2968376 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1217 10:39:57.723128 2968376 command_runner.go:130] >           "snapshotter": ""
	I1217 10:39:57.723131 2968376 command_runner.go:130] >         }
	I1217 10:39:57.723134 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723136 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723146 2968376 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1217 10:39:57.723151 2968376 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1217 10:39:57.723156 2968376 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1217 10:39:57.723161 2968376 command_runner.go:130] >     "disableApparmor": false,
	I1217 10:39:57.723166 2968376 command_runner.go:130] >     "disableHugetlbController": true,
	I1217 10:39:57.723170 2968376 command_runner.go:130] >     "disableProcMount": false,
	I1217 10:39:57.723174 2968376 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1217 10:39:57.723177 2968376 command_runner.go:130] >     "enableCDI": true,
	I1217 10:39:57.723181 2968376 command_runner.go:130] >     "enableSelinux": false,
	I1217 10:39:57.723188 2968376 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1217 10:39:57.723195 2968376 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1217 10:39:57.723200 2968376 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1217 10:39:57.723204 2968376 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1217 10:39:57.723208 2968376 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1217 10:39:57.723212 2968376 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1217 10:39:57.723216 2968376 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1217 10:39:57.723222 2968376 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723226 2968376 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1217 10:39:57.723231 2968376 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1217 10:39:57.723236 2968376 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1217 10:39:57.723241 2968376 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1217 10:39:57.723243 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723247 2968376 command_runner.go:130] >   "features": {
	I1217 10:39:57.723251 2968376 command_runner.go:130] >     "supplemental_groups_policy": true
	I1217 10:39:57.723254 2968376 command_runner.go:130] >   },
	I1217 10:39:57.723257 2968376 command_runner.go:130] >   "golang": "go1.24.9",
	I1217 10:39:57.723267 2968376 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723277 2968376 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1217 10:39:57.723281 2968376 command_runner.go:130] >   "runtimeHandlers": [
	I1217 10:39:57.723283 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723287 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723291 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723297 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723299 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723302 2968376 command_runner.go:130] >     },
	I1217 10:39:57.723305 2968376 command_runner.go:130] >     {
	I1217 10:39:57.723308 2968376 command_runner.go:130] >       "features": {
	I1217 10:39:57.723315 2968376 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1217 10:39:57.723319 2968376 command_runner.go:130] >         "user_namespaces": true
	I1217 10:39:57.723322 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723326 2968376 command_runner.go:130] >       "name": "runc"
	I1217 10:39:57.723328 2968376 command_runner.go:130] >     }
	I1217 10:39:57.723335 2968376 command_runner.go:130] >   ],
	I1217 10:39:57.723338 2968376 command_runner.go:130] >   "status": {
	I1217 10:39:57.723342 2968376 command_runner.go:130] >     "conditions": [
	I1217 10:39:57.723345 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723348 2968376 command_runner.go:130] >         "message": "",
	I1217 10:39:57.723352 2968376 command_runner.go:130] >         "reason": "",
	I1217 10:39:57.723356 2968376 command_runner.go:130] >         "status": true,
	I1217 10:39:57.723361 2968376 command_runner.go:130] >         "type": "RuntimeReady"
	I1217 10:39:57.723364 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723367 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723373 2968376 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1217 10:39:57.723378 2968376 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1217 10:39:57.723382 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723386 2968376 command_runner.go:130] >         "type": "NetworkReady"
	I1217 10:39:57.723389 2968376 command_runner.go:130] >       },
	I1217 10:39:57.723391 2968376 command_runner.go:130] >       {
	I1217 10:39:57.723414 2968376 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1217 10:39:57.723421 2968376 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1217 10:39:57.723426 2968376 command_runner.go:130] >         "status": false,
	I1217 10:39:57.723432 2968376 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1217 10:39:57.723434 2968376 command_runner.go:130] >       }
	I1217 10:39:57.723437 2968376 command_runner.go:130] >     ]
	I1217 10:39:57.723440 2968376 command_runner.go:130] >   }
	I1217 10:39:57.723442 2968376 command_runner.go:130] > }
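
In the "crictl info" dump above, the status conditions are the interesting part: RuntimeReady is true, NetworkReady is false with reason NetworkPluginNotReady (expected at this point, since the kindnet CNI config is only written once the control plane is up), and containerd 2.2 additionally reports its cgroup v1 deprecation as a failed ContainerdHasNoDeprecationWarnings condition. A small sketch that pulls those conditions out of the same JSON, with ad-hoc struct tags matching the output:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runtimeStatus maps only the "status.conditions" slice of "crictl info".
	type runtimeStatus struct {
		Status struct {
			Conditions []struct {
				Type    string `json:"type"`
				Status  bool   `json:"status"`
				Reason  string `json:"reason"`
				Message string `json:"message"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "info").Output()
		if err != nil {
			panic(err)
		}
		var info runtimeStatus
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		for _, c := range info.Status.Conditions {
			fmt.Printf("%s=%v %s %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}
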
	I1217 10:39:57.726093 2968376 cni.go:84] Creating CNI manager for ""
	I1217 10:39:57.726119 2968376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:39:57.726139 2968376 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:39:57.726166 2968376 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:39:57.726283 2968376 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
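The kubeadm config printed above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, all wired to the containerd socket and the 10.244.0.0/16 pod CIDR. A sketch of walking those documents with gopkg.in/yaml.v3 and reading each kind, assuming the /var/tmp/minikube/kubeadm.yaml.new path the file is copied to a few lines below:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// A multi-document stream is decoded one document per Decode call.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.Kind, doc.APIVersion)
		}
	}
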
	I1217 10:39:57.726359 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:39:57.733320 2968376 command_runner.go:130] > kubeadm
	I1217 10:39:57.733342 2968376 command_runner.go:130] > kubectl
	I1217 10:39:57.733347 2968376 command_runner.go:130] > kubelet
	I1217 10:39:57.734253 2968376 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 10:39:57.734351 2968376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:39:57.741900 2968376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:39:57.754718 2968376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:39:57.767131 2968376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 10:39:57.780328 2968376 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:39:57.783968 2968376 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 10:39:57.784263 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:57.891500 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
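
Every "ssh_runner.go:195] Run: ..." line above is a command executed inside the node container over SSH, and the "command_runner.go:130]" lines that follow echo its output. A local stand-in using os/exec, only to illustrate that run-and-log shape (the real runner streams the command over an SSH session):

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
	)

	// run mimics the ssh_runner/command_runner log shape with a local process.
	func run(name string, args ...string) error {
		fmt.Printf("Run: %s %v\n", name, args)
		var out bytes.Buffer
		cmd := exec.Command(name, args...)
		cmd.Stdout = &out
		cmd.Stderr = &out
		err := cmd.Run()
		sc := bufio.NewScanner(&out)
		for sc.Scan() {
			fmt.Println(">", sc.Text()) // echo each output line, as command_runner does
		}
		return err
	}

	func main() {
		_ = run("systemctl", "is-active", "kubelet")
	}
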
	I1217 10:39:58.252332 2968376 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:39:58.252409 2968376 certs.go:195] generating shared ca certs ...
	I1217 10:39:58.252461 2968376 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.252670 2968376 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:39:58.252752 2968376 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:39:58.252788 2968376 certs.go:257] generating profile certs ...
	I1217 10:39:58.252943 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:39:58.253053 2968376 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:39:58.253133 2968376 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:39:58.253172 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 10:39:58.253214 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 10:39:58.253260 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 10:39:58.253294 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 10:39:58.253341 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 10:39:58.253377 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 10:39:58.253421 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 10:39:58.253456 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 10:39:58.253577 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:39:58.253658 2968376 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:39:58.253688 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:39:58.253756 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:39:58.253819 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:39:58.253883 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:39:58.253975 2968376 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:39:58.254044 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.254093 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem -> /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.254126 2968376 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.254782 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:39:58.276977 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:39:58.300224 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:39:58.319429 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:39:58.338203 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:39:58.355898 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:39:58.373473 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:39:58.391528 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:39:58.408858 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:39:58.426819 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:39:58.444926 2968376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:39:58.462979 2968376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 10:39:58.476114 2968376 ssh_runner.go:195] Run: openssl version
	I1217 10:39:58.483093 2968376 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 10:39:58.483240 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.490661 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:39:58.498193 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502204 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502289 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.502352 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:39:58.543361 2968376 command_runner.go:130] > b5213941
	I1217 10:39:58.543894 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 10:39:58.551548 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.559110 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:39:58.567064 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.570982 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571071 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.571149 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:39:58.611772 2968376 command_runner.go:130] > 51391683
	I1217 10:39:58.612217 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:39:58.619901 2968376 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.627496 2968376 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:39:58.635170 2968376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639161 2968376 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639286 2968376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.639343 2968376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:39:58.679963 2968376 command_runner.go:130] > 3ec20f2e
	I1217 10:39:58.680491 2968376 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
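
The three-step pattern repeated above (symlink the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash with "openssl x509 -hash -noout", then verify /etc/ssl/certs/<hash>.0 is a symlink) is how the node's trust store is populated: OpenSSL resolves CAs by subject-hash filenames. A sketch of the same steps that shells out to openssl for the hash, since reimplementing OpenSSL's subject-hash algorithm in Go is not worth it here; the path and the b5213941 example come from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links a PEM cert into /etc/ssl/certs under its subject hash.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // equivalent of ln -fs: replace any existing link
		if err := os.Symlink(pem, link); err != nil {
			return err
		}
		fmt.Println("linked", link, "->", pem)
		return nil
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}
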
	I1217 10:39:58.687873 2968376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691452 2968376 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:39:58.691483 2968376 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 10:39:58.691491 2968376 command_runner.go:130] > Device: 259,1	Inode: 3648630     Links: 1
	I1217 10:39:58.691498 2968376 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 10:39:58.691503 2968376 command_runner.go:130] > Access: 2025-12-17 10:35:51.067485305 +0000
	I1217 10:39:58.691508 2968376 command_runner.go:130] > Modify: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691513 2968376 command_runner.go:130] > Change: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691519 2968376 command_runner.go:130] >  Birth: 2025-12-17 10:31:46.445154139 +0000
	I1217 10:39:58.691792 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 10:39:58.732576 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.733078 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 10:39:58.773416 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.773947 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 10:39:58.814511 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.815058 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 10:39:58.855809 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.856437 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 10:39:58.897493 2968376 command_runner.go:130] > Certificate will not expire
	I1217 10:39:58.897637 2968376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 10:39:58.937941 2968376 command_runner.go:130] > Certificate will not expire
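
Each "openssl x509 -noout -in <cert> -checkend 86400" above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" is what allows the restart to reuse the existing certs instead of regenerating them. The equivalent check in pure Go with crypto/x509, against one of the cert paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// willExpireWithin reports whether the PEM cert at path expires within d,
	// mirroring "openssl x509 -checkend".
	func willExpireWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		if expiring {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
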
	I1217 10:39:58.938362 2968376 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:39:58.938478 2968376 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:39:58.938558 2968376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:39:58.967095 2968376 cri.go:89] found id: ""
	I1217 10:39:58.967172 2968376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:39:58.974207 2968376 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 10:39:58.974232 2968376 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 10:39:58.974239 2968376 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 10:39:58.975124 2968376 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 10:39:58.975142 2968376 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 10:39:58.975194 2968376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 10:39:58.982722 2968376 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:39:58.983159 2968376 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232588" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.983280 2968376 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232588" cluster setting kubeconfig missing "functional-232588" context setting]
	I1217 10:39:58.983551 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.984002 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.984156 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
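
Just before building the client config above, kubeconfig.go finds no functional-232588 entry in the test's kubeconfig and repairs it under a write lock. A minimal sketch of the same repair step using the real k8s.io/client-go/tools/clientcmd API (the helper name and arguments here are illustrative, and the real repair also writes the matching user entry with the client cert paths seen above):

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // addProfile inserts cluster and context entries for a profile, which is
    // what the "kubeconfig needs updating (will repair)" lines above do.
    func addProfile(path, name, server, caFile string) error {
        cfg, err := clientcmd.LoadFromFile(path) // *api.Config
        if err != nil {
            return err
        }
        cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        return clientcmd.WriteToFile(*cfg, path)
    }

For this run that would amount to addProfile(kubeconfigPath, "functional-232588", "https://192.168.49.2:8441", caCrtPath).
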
	I1217 10:39:58.984706 2968376 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 10:39:58.984730 2968376 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 10:39:58.984737 2968376 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 10:39:58.984745 2968376 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 10:39:58.984756 2968376 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 10:39:58.984794 2968376 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 10:39:58.985054 2968376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 10:39:58.992764 2968376 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 10:39:58.992810 2968376 kubeadm.go:602] duration metric: took 17.660629ms to restartPrimaryControlPlane
	I1217 10:39:58.992820 2968376 kubeadm.go:403] duration metric: took 54.467316ms to StartCluster
	I1217 10:39:58.992834 2968376 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.992909 2968376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:58.993526 2968376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:39:58.993746 2968376 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 10:39:58.994170 2968376 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:39:58.994219 2968376 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
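
Of the long toEnable map above, only default-storageclass and storage-provisioner are true, which is why the next lines set exactly those two addons on the profile. A small sketch of reducing that map to the addons actually being turned on (hypothetical helper):

    import "sort"

    // enabledAddons filters a toEnable map like the one logged above down
    // to the addon names marked true, in stable order.
    func enabledAddons(toEnable map[string]bool) []string {
        var on []string
        for name, enable := range toEnable {
            if enable {
                on = append(on, name)
            }
        }
        sort.Strings(on)
        return on // here: [default-storageclass storage-provisioner]
    }
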
	I1217 10:39:58.994288 2968376 addons.go:70] Setting storage-provisioner=true in profile "functional-232588"
	I1217 10:39:58.994301 2968376 addons.go:239] Setting addon storage-provisioner=true in "functional-232588"
	I1217 10:39:58.994329 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:58.994354 2968376 addons.go:70] Setting default-storageclass=true in profile "functional-232588"
	I1217 10:39:58.994416 2968376 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232588"
	I1217 10:39:58.994775 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:58.994809 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.000060 2968376 out.go:179] * Verifying Kubernetes components...
	I1217 10:39:59.002988 2968376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:39:59.030107 2968376 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:39:59.030278 2968376 kapi.go:59] client config for functional-232588: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 10:39:59.030548 2968376 addons.go:239] Setting addon default-storageclass=true in "functional-232588"
	I1217 10:39:59.030583 2968376 host.go:66] Checking if "functional-232588" exists ...
	I1217 10:39:59.030999 2968376 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:39:59.046619 2968376 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 10:39:59.049547 2968376 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.049578 2968376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 10:39:59.049652 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.071122 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:59.078111 2968376 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 10:39:59.078138 2968376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 10:39:59.078204 2968376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:39:59.106268 2968376 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:39:59.210035 2968376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:39:59.247804 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:39:59.250104 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.029975 2968376 node_ready.go:35] waiting up to 6m0s for node "functional-232588" to be "Ready" ...
	I1217 10:40:00.030121 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.030183 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:00.030443 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030485 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030522 2968376 retry.go:31] will retry after 293.620925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030561 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.030575 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.030582 2968376 retry.go:31] will retry after 156.365506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
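
From here on, every kubectl apply fails the same way: client-side validation tries to download the OpenAPI schema from https://localhost:8441, the apiserver is not yet accepting connections, and minikube's retry helper reschedules the apply (now with --force) after a varying delay: 293ms, 156ms, 279ms, and up to several seconds later in the log. A sketch of that retry pattern (assumed shape of the retry.go helper; the exact backoff and jitter policy is an assumption):

    import (
        "math/rand"
        "time"
    )

    // retryApply re-runs a failing apply with jittered, roughly doubling
    // delays, matching the "will retry after ..." cadence in this log.
    func retryApply(apply func() error, attempts int) (err error) {
        delay := 150 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // jitter
            delay *= 2
        }
        return err // last apply error after exhausting attempts
    }
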
	I1217 10:40:00.030650 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:00.188354 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.324847 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.436532 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.436662 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.436836 2968376 retry.go:31] will retry after 279.814099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.516954 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.518501 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.518555 2968376 retry.go:31] will retry after 262.10287ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.531577 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:00.531724 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:00.533353 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 10:40:00.717812 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:00.781511 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:00.801403 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.801643 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.801671 2968376 retry.go:31] will retry after 799.844048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868602 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:00.868642 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:00.868698 2968376 retry.go:31] will retry after 554.70169ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.031171 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.031268 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.031636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.424206 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:01.486829 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.486884 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.486903 2968376 retry.go:31] will retry after 534.910165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.531036 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:01.531190 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:01.531514 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:01.601938 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:01.666361 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:01.666415 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:01.666435 2968376 retry.go:31] will retry after 494.63938ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.022963 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:02.030812 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.030945 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.031372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:02.031439 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:02.093352 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.093469 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.093495 2968376 retry.go:31] will retry after 1.147395482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.161756 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:02.224785 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:02.224835 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.224873 2968376 retry.go:31] will retry after 722.380129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:02.530243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:02.530335 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:02.530682 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:02.948277 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:03.019220 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.023774 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.023820 2968376 retry.go:31] will retry after 1.527910453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.031105 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.031182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.031525 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:03.241898 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:03.304153 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:03.304205 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.304227 2968376 retry.go:31] will retry after 2.808262652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:03.530353 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:03.530425 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:03.530767 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.030262 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.030340 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.030662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:04.530190 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:04.530267 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:04.530613 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:04.530682 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:04.552783 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:04.614277 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:04.618634 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:04.618671 2968376 retry.go:31] will retry after 1.686088172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:05.031243 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.031319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.031611 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:05.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:05.530314 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:05.530636 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.030216 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.030295 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.030584 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:06.113005 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:06.174987 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.175028 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.175048 2968376 retry.go:31] will retry after 2.620064864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.305352 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:06.366722 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:06.366771 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.366790 2968376 retry.go:31] will retry after 6.20410258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:06.531098 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:06.531170 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:06.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:06.531566 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
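
Interleaved with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-232588 roughly every 500ms and treats connection refused as retryable, against the 6m0s budget set when the node wait started. An equivalent poll written against client-go (the helper name is illustrative; the clientset calls are the real API surface):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls like the loop above: GET the node twice a second
    // and keep going through "connection refused" until Ready or timeout.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence logged here
        }
        return false
    }
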
	I1217 10:40:07.030285 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.030361 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.030703 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:07.530195 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:07.530269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:07.530540 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.030245 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.030326 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.030668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.530335 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:08.530413 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:08.530732 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:08.796304 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:08.853426 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:08.857034 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:08.857067 2968376 retry.go:31] will retry after 3.174722269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:09.030586 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:09.030666 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:09.031008 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:09.031064 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:09.530804 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:09.530879 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:09.531204 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:10.031140 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:10.031218 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:10.031521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:10.530229 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:10.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:10.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:11.030272 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:11.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:11.030674 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:11.530355 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:11.530450 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:11.530745 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:11.530788 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:12.030183 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:12.030259 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:12.030568 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:12.032754 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:12.104534 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.104594 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.104617 2968376 retry.go:31] will retry after 7.427014064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.531116 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:12.531194 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:12.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:12.571824 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:12.627783 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:12.631439 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:12.631473 2968376 retry.go:31] will retry after 5.673499761s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:13.031007 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:13.031079 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:13.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:13.530133 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:13.530207 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:13.530473 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:14.030881 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:14.030963 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:14.031294 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:14.031348 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:14.531063 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:14.531139 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:14.531511 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:15.030415 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:15.030505 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:15.030865 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:15.530246 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:15.530327 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:15.530615 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:16.030257 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:16.030335 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:16.030683 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:16.530343 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:16.530412 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:16.530735 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:16.530792 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:17.030348 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:17.030438 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:17.030746 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:17.530427 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:17.530508 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:17.530854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:18.031138 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:18.031239 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:18.031524 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:18.306153 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:18.363523 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:18.367149 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:18.367184 2968376 retry.go:31] will retry after 11.676089788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
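
Note the delays chosen by retry.go: 5.67s above, 11.68s here, and later 21.25s and then 15.30s for the same storageclass manifest. Growing intervals with a non-monotonic step are consistent with exponential backoff plus randomized jitter, which keeps the two addon appliers (storageclass and storage-provisioner) from retrying in lockstep against the restarting apiserver. A sketch of that pattern; applyWithRetry and its parameters are assumptions for illustration, not minikube's retry.go API:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry retries kubectl apply with exponential backoff plus
    // jitter. Illustrative sketch; minikube's actual retry logic differs.
    func applyWithRetry(manifest string, attempts int) error {
    	delay := 5 * time.Second
    	var err error
    	for i := 0; i < attempts; i++ {
    		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if e == nil {
    			return nil
    		}
    		err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
    		jitter := time.Duration(rand.Int63n(int64(delay))) // randomize so retries don't align
    		time.Sleep(delay + jitter)
    		delay *= 2 // back off exponentially between attempts
    	}
    	return err
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 4); err != nil {
    		fmt.Println(err)
    	}
    }

Drawing the jitter uniformly from [0, delay) while doubling the base each attempt is one common way to produce intervals of the rough shape logged here.
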
	I1217 10:40:18.530483 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:18.530628 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:18.530998 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:18.531054 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:19.031060 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:19.031138 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:19.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:19.530144 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:19.530217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:19.530501 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:19.532780 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:19.596086 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:19.596134 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:19.596153 2968376 retry.go:31] will retry after 6.09625298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:20.031102 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:20.031251 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:20.031747 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:20.530743 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:20.530896 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:20.531474 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:20.531549 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:21.030954 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:21.031034 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:21.031324 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:21.531097 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:21.531170 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:21.531522 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:22.030145 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:22.030232 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:22.030617 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:22.530952 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:22.531023 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:22.531286 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:23.031049 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:23.031121 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:23.031488 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:23.031552 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:23.531151 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:23.531233 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:23.531594 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:24.030205 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:24.030271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:24.030618 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:24.530297 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:24.530374 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:24.530704 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:25.030556 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:25.030634 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:25.031013 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:25.530898 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:25.530990 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:25.531308 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:25.531351 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:25.692701 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:25.761074 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:25.761116 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:25.761134 2968376 retry.go:31] will retry after 8.308022173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:26.030656 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:26.030736 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:26.031050 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:26.530816 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:26.530887 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:26.531227 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:27.030620 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:27.030689 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:27.030975 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:27.530810 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:27.530882 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:27.531225 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:28.031037 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:28.031121 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:28.031512 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:28.031588 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:28.530251 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:28.530319 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:28.530583 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:29.030525 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:29.030614 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:29.031053 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:29.530775 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:29.530846 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:29.531189 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:30.032544 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:30.032629 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:30.032970 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:30.033031 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:30.044190 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:30.141158 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:30.141207 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:30.141228 2968376 retry.go:31] will retry after 21.251088353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:30.530770 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:30.530848 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:30.531184 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:31.031023 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:31.031097 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:31.031429 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:31.530162 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:31.530338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:31.530687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:32.030318 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:32.030410 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:32.030863 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:32.530571 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:32.530648 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:32.531098 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:32.531174 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:33.030920 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:33.031010 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:33.031359 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:33.531147 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:33.531219 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:33.531570 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:34.030334 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:34.030418 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:34.030775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:34.070045 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:34.128651 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:34.132259 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:34.132293 2968376 retry.go:31] will retry after 23.004999937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:34.530392 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:34.530466 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:34.530735 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:35.030855 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:35.030938 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:35.031252 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:35.031308 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:35.530763 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:35.530834 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:35.531181 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:36.030980 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:36.031106 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:36.031458 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:36.530826 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:36.530905 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:36.531257 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:37.031180 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:37.031261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:37.031662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:37.031754 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:37.530206 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:37.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:37.530649 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:38.030423 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:38.030503 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:38.030854 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:38.530587 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:38.530659 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:38.531005 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:39.030844 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:39.030924 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:39.031203 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:39.531010 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:39.531096 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:39.531446 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:39.531521 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:40.031145 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:40.031231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:40.031658 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:40.530343 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:40.530420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:40.530707 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:41.030992 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:41.031064 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:41.031409 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:41.531176 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:41.531252 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:41.531592 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:41.531649 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:42.030335 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:42.030418 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:42.030713 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:42.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:42.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:42.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:43.030224 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:43.030309 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:43.030694 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:43.530392 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:43.530468 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:43.530795 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:44.030259 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:44.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:44.030666 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:44.030720 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:44.530388 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:44.530467 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:44.530803 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:45.030897 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:45.032736 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:45.034090 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 10:40:45.530857 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:45.530936 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:45.531262 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:46.031009 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:46.031081 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:46.031343 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:46.031380 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:46.531073 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:46.531152 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:46.531521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:47.030170 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:47.030255 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:47.030602 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:47.530303 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:47.530374 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:47.530644 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:48.030323 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:48.030406 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:48.030744 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:48.530500 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:48.530605 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:48.530966 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:48.531023 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:49.030795 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:49.030871 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:49.031172 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:49.530860 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:49.530935 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:49.531267 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:50.031129 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:50.031208 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:50.031548 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:50.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:50.530303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:50.530574 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:51.030327 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:51.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:51.030749 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:51.030806 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:51.393321 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:40:51.454332 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:51.458316 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:51.458350 2968376 retry.go:31] will retry after 15.302727777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:51.530571 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:51.530643 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:51.530966 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:52.030247 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:52.030332 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:52.030623 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:52.530219 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:52.530312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:52.530691 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:53.030289 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:53.030364 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:53.030698 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:53.530380 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:53.530457 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:53.530780 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:53.530833 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:54.030549 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:54.030652 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:54.030947 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:54.530639 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:54.530716 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:54.531043 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:55.030934 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:55.031013 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:55.031455 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:55.531099 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:55.531193 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:55.531510 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:55.531578 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:40:56.030320 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.030398 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.030700 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:56.530273 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:56.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.030303 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.030719 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:40:57.138000 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:40:57.193212 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:40:57.197444 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:40:57.197478 2968376 retry.go:31] will retry after 20.170499035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
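
By this point each addon manifest has failed several applies over about 45 seconds, and every node poll in between has returned the same connection refused: everything is blocked on the one restarting apiserver. One way to express the overall pattern is a fixed-interval retry bounded by a context deadline, so the loop gives up after a set budget instead of retrying indefinitely; a minimal sketch, with the budget, interval, and helper name assumed for illustration rather than taken from minikube:

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"time"
    )

    // retryUntil runs op at a fixed interval until it succeeds or ctx expires.
    func retryUntil(ctx context.Context, interval time.Duration, op func() error) error {
    	for {
    		err := op()
    		if err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("giving up after deadline: %w (last error: %v)", ctx.Err(), err)
    		case <-time.After(interval):
    			// fall through to the next attempt
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    	defer cancel()
    	err := retryUntil(ctx, 500*time.Millisecond, func() error {
    		return errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
    	})
    	fmt.Println(err)
    }
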
	I1217 10:40:57.530886 2968376 type.go:168] "Request Body" body=""
	I1217 10:40:57.530963 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:40:57.531316 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:40:58.031521 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polls of /api/v1/nodes/functional-232588 repeated every ~500ms through 10:41:06.530, each answered with "dial tcp 192.168.49.2:8441: connect: connection refused" and a node_ready.go:55 warning roughly every 2s ...]
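
The collapsed poll loop above is minikube waiting for the node's Ready condition to go True. A rough, self-contained equivalent using client-go follows; this is a sketch of the loop, not minikube's node_ready.go, and `waitNodeReady` is an illustrative name (the kubeconfig path and node name are copied from this log).

```go
// Sketch: poll a node's Ready condition every 500ms until it is True
// or the context expires, logging failures and continuing, as above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Matches the node_ready.go warnings above: log and keep polling.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "functional-232588"))
}
```
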
	I1217 10:41:06.762229 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:06.820073 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:06.820109 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:06.820128 2968376 retry.go:31] will retry after 35.040877283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
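
The ssh_runner/command_runner pairs above show how these applies are executed: a shell-out to the bundled kubectl with KUBECONFIG pointed at the in-VM config, with stderr captured and folded into the "Process exited with status 1" report. A hedged sketch of that pattern, with paths copied from the log and `applyAddon` as an illustrative helper rather than minikube's API:

```go
// Sketch: run the bundled kubectl under sudo with KUBECONFIG set
// (sudo treats leading VAR=value arguments as environment assignments)
// and surface stderr on failure, as addons.go does above.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "-f", manifest)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// The "Process exited with status 1" + stderr pairing in the log.
		return fmt.Errorf("%v\nstderr:\n%s", err, stderr.String())
	}
	return nil
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println(err)
	}
}
```
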
	[... GET polls of /api/v1/nodes/functional-232588 continued unchanged from 10:41:07.030 through 10:41:17.030, every request refused, with node_ready.go:55 warnings at each ~2s retry ...]
	I1217 10:41:17.368346 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:17.428304 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:17.431873 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 10:41:17.431904 2968376 retry.go:31] will retry after 38.363968078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET polls of /api/v1/nodes/functional-232588 continued unchanged from 10:41:17.531 through 10:41:41.530, every request refused, with node_ready.go:55 warnings at each ~2s retry ...]
	I1217 10:41:41.862178 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 10:41:41.923706 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923759 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:41.923872 2968376 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
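
Every apply failure in this log has the same root cause: `kubectl apply` validates the manifest by first downloading the OpenAPI schema from the apiserver, so while nothing is listening on port 8441 the command fails at validation, before the manifest content matters at all. A quick way to distinguish "apiserver unreachable" from "manifest actually invalid" is to probe the port directly; a sketch, with `apiserverReachable` as an illustrative name:

```go
// Sketch: a TCP probe of the apiserver port. A "connection refused"
// here means the validation errors above are connectivity failures,
// not problems with the addon YAML itself.
package main

import (
	"fmt"
	"net"
	"time"
)

func apiserverReachable(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver probe failed: %v\n", err)
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(apiserverReachable("localhost:8441"))
}
```
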
	[... GET polls of /api/v1/nodes/functional-232588 continued unchanged from 10:41:42.031 through 10:41:53.031, every request refused, with node_ready.go:55 warnings at each ~2s retry ...]
	I1217 10:41:53.531095 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:53.531182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:53.531545 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:54.030257 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:54.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:54.030711 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:54.530208 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:54.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:54.534934 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 10:41:55.030911 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:55.031006 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:55.031280 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:55.530658 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:55.530757 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:55.531092 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:55.531146 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:55.796501 2968376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 10:41:55.858122 2968376 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858175 2968376 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 10:41:55.858259 2968376 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 10:41:55.863014 2968376 out.go:179] * Enabled addons: 
	I1217 10:41:55.865747 2968376 addons.go:530] duration metric: took 1m56.871522842s for enable addons: enabled=[]
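
[Note: the storage-provisioner apply above fails for the same underlying reason as the node polls: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver (here via localhost:8441) and gets connection refused; the kubectl error itself names --validate=false as the escape hatch. The "apply failed, will retry" line at addons.go:477 suggests a retry-on-nonzero-exit wrapper around the kubectl invocation. A hypothetical sketch of that pattern follows; only the command line is taken verbatim from this log, while the attempt count and backoff are illustrative assumptions.]

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry runs the same command line the log shows and retries on a
// non-zero exit, since the apiserver may simply not be listening yet.
func applyWithRetry(kubectl, manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// sudo accepts leading VAR=value arguments as environment assignments.
		out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed (attempt %d): %v\n%s", i+1, err, out)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		3, 2*time.Second)
	if err != nil {
		fmt.Println(err)
	}
}

[In this run the retry budget was exhausted while the apiserver was still down, which is why the log then reports "Enabled addons:" with an empty list after the 1m56s duration metric.]
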
	I1217 10:41:56.030483 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:56.030561 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:56.030907 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:56.530592 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:56.530668 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:56.530973 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:57.030256 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:57.030336 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:57.030717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:57.530234 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:57.530308 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:57.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:58.033611 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:58.033711 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:58.033996 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:41:58.034053 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:41:58.530276 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:58.530381 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:58.530759 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:59.030773 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:59.030845 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:59.031207 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:41:59.531008 2968376 type.go:168] "Request Body" body=""
	I1217 10:41:59.531115 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:41:59.531404 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:00.030325 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:00.030471 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:00.030856 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:00.530785 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:00.530901 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:00.531226 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:00.531288 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:01.030976 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:01.031043 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:01.031299 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:01.531109 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:01.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:01.531522 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:02.030231 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:02.030334 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:02.030695 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:02.530421 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:02.530490 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:02.530829 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:03.030527 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:03.030623 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:03.030985 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:03.031044 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:03.530805 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:03.530890 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:03.531241 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:04.030644 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:04.030719 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:04.031014 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:04.530743 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:04.530821 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:04.531126 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:05.030982 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:05.031061 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:05.031449 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:05.031509 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:05.530161 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:05.530231 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:05.530503 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:06.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:06.030373 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:06.030811 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:06.530502 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:06.530577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:06.530933 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:07.030646 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:07.030722 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:07.031021 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:07.530377 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:07.530455 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:07.530792 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:07.530847 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:08.030509 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:08.030589 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:08.030943 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:08.530625 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:08.530698 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:08.530961 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:09.030865 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:09.030938 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:09.031271 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:09.531064 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:09.531145 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:09.531546 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:09.531604 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:10.030184 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:10.030265 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:10.030604 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:10.530301 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:10.530388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:10.530737 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:11.030319 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:11.030395 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:11.030731 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:11.530208 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:11.530286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:11.530559 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:12.030254 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:12.030339 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:12.030671 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:12.030728 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:12.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:12.530298 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:12.530650 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:13.030199 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:13.030289 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:13.030609 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:13.530174 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:13.530251 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:13.530593 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:14.030325 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:14.030403 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:14.030742 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:14.030815 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:14.530197 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:14.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:14.530595 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:15.030677 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:15.030769 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:15.031176 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:15.530953 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:15.531037 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:15.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:16.030632 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:16.030705 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:16.031041 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:16.031095 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:16.530824 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:16.530899 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:16.531227 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:17.031078 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:17.031158 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:17.031507 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:17.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:17.530279 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:17.530603 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:18.030263 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:18.030383 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:18.030909 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:18.530209 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:18.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:18.530632 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:18.530733 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:19.030297 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:19.030368 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:19.030628 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:19.530369 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:19.530456 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:19.530831 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:20.030743 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:20.030860 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:20.031293 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:20.531060 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:20.531143 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:20.531416 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:20.531464 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:21.030175 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:21.030250 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:21.030599 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:21.530297 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:21.530372 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:21.530710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:22.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:22.030288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:22.030595 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:22.530236 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:22.530323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:22.530665 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:23.030258 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:23.030337 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:23.030699 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:23.030756 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:23.530408 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:23.530481 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:23.530771 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:24.030482 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:24.030556 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:24.030886 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:24.530220 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:24.530300 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:24.530624 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:25.030607 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:25.030684 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:25.030955 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:25.030997 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:25.530792 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:25.530868 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:25.531224 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:26.031041 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:26.031118 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:26.031467 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:26.530812 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:26.530894 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:26.531163 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:27.030944 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:27.031023 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:27.031350 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:27.031410 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:27.531121 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:27.531199 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:27.531551 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:28.030233 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:28.030315 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:28.030645 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:28.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:28.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:28.530690 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:29.030400 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:29.030474 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:29.030789 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:29.530188 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:29.530261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:29.530523 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:29.530575 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:30.030527 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:30.030608 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:30.030914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:30.530157 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:30.530235 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:30.530505 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:31.030212 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:31.030285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:31.030567 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:31.530218 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:31.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:31.530647 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:31.530702 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:32.030406 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:32.030487 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:32.030812 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:32.530487 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:32.530568 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:32.530890 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:33.030273 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:33.030372 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:33.030696 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:33.530243 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:33.530324 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:33.530662 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:34.030960 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:34.031034 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:34.031331 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:34.031398 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:34.531153 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:34.531229 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:34.531528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:35.031148 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:35.031227 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:35.031548 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:35.530192 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:35.530297 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:35.530620 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:36.030260 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:36.030343 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:36.030695 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:36.530255 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:36.530338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:36.530702 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:36.530761 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:37.030424 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:37.030507 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:37.030895 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:37.530587 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:37.530660 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:37.530982 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:38.030793 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:38.030877 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:38.031209 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:38.530652 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:38.530746 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:38.531014 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:38.531064 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:42:39.030920 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:39.030999 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:39.031358 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:39.530863 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:39.530944 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:39.531269 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:40.033571 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:40.033646 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:40.033999 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:42:40.530772 2968376 type.go:168] "Request Body" body=""
	I1217 10:42:40.530895 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:42:40.531207 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:42:40.531255 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 further request/response pairs elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-232588 was retried every ~500ms from 10:42:41.030 through 10:43:42.030, every attempt failed with "dial tcp 192.168.49.2:8441: connect: connection refused" before any response could be read (status="" milliseconds=0), and node_ready.go:55 repeated the "will retry" warning roughly every two seconds ...]
	W1217 10:43:42.030924 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:42.530577 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:42.530657 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:42.531023 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:43.030790 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:43.030866 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:43.031190 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:43.530930 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:43.531021 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:43.531357 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:44.031028 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:44.031107 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:44.031450 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:44.031512 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:44.530164 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:44.530233 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:44.530544 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:45.031170 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:45.031261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:45.031590 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:45.530211 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:45.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:45.530682 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:46.030354 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:46.030422 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:46.030698 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:46.530232 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:46.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:46.530634 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:46.530696 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:47.030366 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:47.030444 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:47.030742 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:47.530404 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:47.530478 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:47.530752 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:48.030496 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:48.030575 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:48.030882 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:48.530214 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:48.530292 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:48.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:49.030376 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.030444 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.030775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:49.030832 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:49.530474 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:49.530545 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:49.530877 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.030914 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.030991 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.031360 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:50.531113 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:50.531187 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:50.531458 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.030169 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.030240 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.030588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:51.530249 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:51.530328 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:51.530647 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:51.530704 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:52.030197 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.030269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.030691 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:52.530433 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:52.530506 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:52.530821 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.030577 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.030964 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:53.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:53.530257 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:53.530535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:54.030277 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.030388 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.030914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:54.030974 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:54.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:54.530304 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:54.530629 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.030620 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.030692 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.030975 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:55.530238 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:55.530313 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:55.530627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.030306 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:56.530440 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:56.530509 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:56.530781 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:56.530820 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
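
	Each "Request" record above describes the same HTTP exchange: a GET to the node URL with an Accept header that prefers the Kubernetes protobuf encoding and falls back to JSON. A self-contained sketch reproducing it with plain net/http (TLS verification is disabled here purely for illustration; a real client would trust the cluster CA) fails with exactly the connection-refused error the log keeps retrying:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Demo only: skip certificate verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	req, err := http.NewRequest(http.MethodGet,
		"https://192.168.49.2:8441/api/v1/nodes/functional-232588", nil)
	if err != nil {
		panic(err)
	}
	// Prefer the protobuf encoding, fall back to JSON, as in the logged requests.
	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
	resp, err := client.Do(req)
	if err != nil {
		// While nothing listens on 8441 this prints the same failure mode:
		// dial tcp 192.168.49.2:8441: connect: connection refused
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```
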
	I1217 10:43:57.030493 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.030591 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.030923 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:57.530749 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:57.530825 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:57.531153 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.030917 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.030996 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.031309 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:58.530552 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:58.530658 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:58.531261 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:43:58.531332 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:43:59.031089 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.031172 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.031521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:43:59.530216 2968376 type.go:168] "Request Body" body=""
	I1217 10:43:59.530308 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:43:59.530593 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.030748 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.030831 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.031142 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:00.530904 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:00.530989 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:00.531375 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:00.531435 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:01.030988 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.031055 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.031330 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:01.531137 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:01.531217 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:01.531543 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.030312 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.030660 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:02.530197 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:02.530277 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:02.530553 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:03.030250 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:03.030323 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:03.030669 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:03.030723 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:03.530375 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:03.530452 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:03.530800 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:04.030214 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:04.030295 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:04.030627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:04.530317 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:04.530420 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:04.530765 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:05.030596 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:05.030677 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:05.031020 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:05.031084 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:05.530367 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:05.530441 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:05.530720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:06.030267 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:06.030359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:06.030720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:06.530416 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:06.530489 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:06.530819 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:07.030188 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:07.030264 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:07.030539 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:07.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:07.530283 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:07.530594 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:07.530643 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:08.030372 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:08.030476 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:08.030889 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:08.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:08.530253 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:08.530521 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:09.031107 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:09.031182 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:09.031487 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:09.530173 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:09.530246 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:09.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:10.030187 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:10.030261 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:10.030583 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:10.030632 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:10.530212 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:10.530285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:10.530616 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:11.030321 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:11.030401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:11.030717 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:11.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:11.530260 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:11.530567 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:12.030265 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:12.030345 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:12.030692 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:12.030750 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:12.530410 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:12.530491 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:12.530831 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:13.030508 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:13.030583 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:13.030845 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:13.530215 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:13.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:13.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:14.030276 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:14.030355 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:14.030683 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:14.530187 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:14.530269 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:14.530570 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:14.530619 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
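
	The retried error is a transport-level failure: nothing is listening on 192.168.49.2:8441, so the TCP connect returns ECONNREFUSED before any HTTP status exists, which is why the response records show status="" and milliseconds=0. A small sketch of how Go code can detect this specific condition (same endpoint as the log; on another network the dial may time out instead of being refused):

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
)

// isConnectionRefused reports whether err unwraps to ECONNREFUSED, the errno
// behind "connect: connection refused" in the warnings above.
func isConnectionRefused(err error) bool {
	return errors.Is(err, syscall.ECONNREFUSED)
}

func main() {
	// Dialing a port with no listener reproduces the log's failure mode.
	_, err := net.Dial("tcp", "192.168.49.2:8441")
	if err != nil {
		fmt.Println("refused?", isConnectionRefused(err), "err:", err)
	}
}
```
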
	I1217 10:44:15.030634 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:15.030728 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:15.031132 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:15.530902 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:15.530978 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:15.531320 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:16.031133 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:16.031225 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:16.031608 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:16.530217 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:16.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:16.530631 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:16.530691 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:17.030347 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:17.030434 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:17.030771 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:17.530190 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:17.530271 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:17.530547 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:18.030304 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:18.030396 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:18.030847 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:18.530228 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:18.530305 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:18.530658 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:19.030405 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:19.030487 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:19.030753 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:19.030793 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:19.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:19.530287 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:19.530630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:20.030471 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:20.030552 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:20.030904 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:20.530566 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:20.530649 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:20.530928 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:21.030264 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:21.030341 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:21.030695 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:21.530387 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:21.530465 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:21.530798 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:21.530854 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:22.030489 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:22.030560 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:22.030836 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:22.530229 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:22.530311 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:22.530651 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:23.030359 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:23.030435 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:23.030790 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:23.530183 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:23.530257 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:23.530538 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:24.030285 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:24.030362 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:24.030720 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:24.030784 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:24.530459 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:24.530561 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:24.530890 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:25.030748 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:25.030819 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:25.031092 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:25.530805 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:25.530886 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:25.531202 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:26.030981 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:26.031066 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:26.031447 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:26.031506 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:26.530164 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:26.530240 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:26.530509 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:27.030231 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:27.030322 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:27.030709 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:27.530233 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:27.530309 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:27.530655 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:28.030347 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:28.030421 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:28.030710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:28.530223 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:28.530325 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:28.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:28.530685 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:29.030667 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:29.030745 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:29.031082 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:29.530788 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:29.530865 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:29.531130 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:30.031110 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:30.031196 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:30.031505 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:30.530206 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:30.530283 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:30.530647 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:30.530712 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:44:31.030226 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:31.030304 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:31.030570 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:31.530239 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:31.530328 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:31.530675 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:32.030374 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:32.030452 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:32.030786 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:44:32.530472 2968376 type.go:168] "Request Body" body=""
	I1217 10:44:32.530546 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:44:32.530828 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:44:32.530875 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same 500 ms GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-232588 repeats from 10:44:33 through 10:45:33, every attempt returning an empty response in 0 ms, with the identical "connect: connection refused" retry warning from node_ready.go roughly every 2 s ...]
	I1217 10:45:33.530186 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:33.530259 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:33.530528 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:33.530569 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:34.030234 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:34.030320 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:34.030648 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:34.530224 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:34.530303 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:34.530668 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:35.030514 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:35.030598 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:35.030879 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:35.530539 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:35.530621 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:35.530944 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:35.530999 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:36.030792 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:36.030868 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:36.031197 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:36.530952 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:36.531027 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:36.531293 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:37.031128 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:37.031222 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:37.031596 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:37.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:37.530284 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:37.530618 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:38.030192 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:38.030278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:38.030552 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:38.030630 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:38.530218 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:38.530293 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:38.530633 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:39.030659 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:39.030738 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:39.031056 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:39.530839 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:39.530914 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:39.531181 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:40.031117 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.031198 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.031558 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:40.031631 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:40.530210 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:40.530292 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:40.530625 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.030178 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.030256 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.030535 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:41.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:41.530290 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:41.530641 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.030366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.030455 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.030891 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:42.530441 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:42.530519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:42.530792 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:42.530833 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:43.030494 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.030585 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.030904 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:43.530228 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:43.530306 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:43.530630 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.030307 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.030381 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.030707 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:44.530227 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:44.530296 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:44.530588 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:45.031339 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.031427 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.031745 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:45.031809 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:45.530203 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:45.530276 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:45.530545 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.030239 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.030321 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.030687 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:46.530366 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:46.530439 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:46.530762 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.030423 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.030519 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.030809 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:47.530502 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:47.530580 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:47.530914 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:47.530970 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:48.030658 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.030731 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.031047 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:48.530357 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:48.530426 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:48.530764 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.030805 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.030882 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.031204 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:49.530982 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:49.531053 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:49.531372 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:49.531427 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:50.031030 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.031115 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.031532 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:50.530205 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:50.530299 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:50.530623 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.030207 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.030286 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.030614 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:51.530323 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:51.530401 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:51.530711 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:52.030254 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.030329 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.030627 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:52.030687 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:52.530213 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:52.530288 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:52.530659 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.030195 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.030278 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.030640 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:53.530268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:53.530359 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:53.530765 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:54.030501 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.030589 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.030906 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:54.030956 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:54.530370 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:54.530458 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:54.530775 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.030784 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.030866 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.031248 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:55.531028 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:55.531111 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:55.531412 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:56.031149 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.031232 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.031533 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:56.031587 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:56.530201 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:56.530282 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:56.530600 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.030260 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.030338 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.030673 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:57.530209 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:57.530285 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:57.530565 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.030268 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.030347 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.030688 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:58.530418 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:58.530493 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:58.530876 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 10:45:58.530935 2968376 node_ready.go:55] error getting node "functional-232588" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232588": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 10:45:59.030221 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.030349 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.030710 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:45:59.530411 2968376 type.go:168] "Request Body" body=""
	I1217 10:45:59.530486 2968376 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232588" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 10:45:59.530845 2968376 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 10:46:00.030785 2968376 type.go:168] "Request Body" body=""
	I1217 10:46:00.030868 2968376 node_ready.go:38] duration metric: took 6m0.00085226s for node "functional-232588" to be "Ready" ...
	I1217 10:46:00.039967 2968376 out.go:203] 
	W1217 10:46:00.043066 2968376 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 10:46:00.043095 2968376 out.go:285] * 
	W1217 10:46:00.047185 2968376 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:46:00.056487 2968376 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:46:07 functional-232588 containerd[5229]: time="2025-12-17T10:46:07.708521582Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:08 functional-232588 containerd[5229]: time="2025-12-17T10:46:08.757378641Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 17 10:46:08 functional-232588 containerd[5229]: time="2025-12-17T10:46:08.760264252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 17 10:46:08 functional-232588 containerd[5229]: time="2025-12-17T10:46:08.768823414Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:08 functional-232588 containerd[5229]: time="2025-12-17T10:46:08.769154590Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:09 functional-232588 containerd[5229]: time="2025-12-17T10:46:09.719423863Z" level=info msg="No images store for sha256:e51c5bc238d591cfa792477ad36236d6d751433afed0e22641b208b8c42c89b3"
	Dec 17 10:46:09 functional-232588 containerd[5229]: time="2025-12-17T10:46:09.721523450Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-232588\""
	Dec 17 10:46:09 functional-232588 containerd[5229]: time="2025-12-17T10:46:09.728263412Z" level=info msg="ImageCreate event name:\"sha256:139b28e7c45f6120a651876f7db60c8dc8c2da89658d2cb729b8871bf45e8e9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:09 functional-232588 containerd[5229]: time="2025-12-17T10:46:09.728732784Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-232588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:10 functional-232588 containerd[5229]: time="2025-12-17T10:46:10.564291348Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 17 10:46:10 functional-232588 containerd[5229]: time="2025-12-17T10:46:10.566764136Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 17 10:46:10 functional-232588 containerd[5229]: time="2025-12-17T10:46:10.568714737Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 17 10:46:10 functional-232588 containerd[5229]: time="2025-12-17T10:46:10.581031489Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.638544848Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.640682209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.647896951Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.648373953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.669869134Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.672204243Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.674145441Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.681979705Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.815453253Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.817833924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.824862618Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 10:46:11 functional-232588 containerd[5229]: time="2025-12-17T10:46:11.825333450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:46:15.823674    9368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:15.824527    9368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:15.826140    9368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:15.826448    9368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:46:15.827960    9368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:46:15 up 16:28,  0 user,  load average: 0.92, 0.39, 0.80
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 10:46:12 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:13 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 17 10:46:13 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:13 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:13 functional-232588 kubelet[9154]: E1217 10:46:13.319870    9154 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:13 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:13 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:13 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 17 10:46:13 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:13 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:14 functional-232588 kubelet[9244]: E1217 10:46:14.077877    9244 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:14 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:14 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:14 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 17 10:46:14 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:14 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:14 functional-232588 kubelet[9266]: E1217 10:46:14.812201    9266 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:14 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:14 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:46:15 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 17 10:46:15 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:15 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:46:15 functional-232588 kubelet[9295]: E1217 10:46:15.575120    9295 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:46:15 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:46:15 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
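
Context for the six-minute wait captured in the log above: minikube's node_ready wait GETs the node object in a loop to read its "Ready" condition, and every attempt here dies at the TCP layer because nothing is listening on 192.168.49.2:8441. Once an apiserver is actually reachable, the same two facts can be verified by hand; a minimal sketch with stock curl and kubectl (endpoint and node name taken from this run; a diagnostic aid, not a fix for the failure itself):

	# Does anything accept connections on the endpoint minikube was polling?
	curl -sk https://192.168.49.2:8441/healthz
	# The Ready condition that minikube's wait loop reads
	kubectl get node functional-232588 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
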
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (378.352438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.27s)
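
The kubelet journal above shows the root cause running through this group of failures: kubelet v1.35.0-rc.1 exits at startup on this host because of the cgroup v1 check ("kubelet is configured to not run on a host using cgroup v1"), systemd restart-loops it (restart counter 826-829), and the apiserver on port 8441 therefore never comes up. The kubeadm preflight warning quoted in the next test names the opt-out: the FailCgroupV1 kubelet configuration option. A minimal sketch of the corresponding KubeletConfiguration fragment, assuming the conventional camelCase YAML rendering of that Go option name (where exactly this lands in minikube's generated /var/lib/kubelet/config.yaml is not shown in this report):

	# Sketch of a KubeletConfiguration fragment, not this run's actual config
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Let kubelet >= v1.35 start on a cgroup v1 host; deprecated path, see
	# https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	failCgroupV1: false
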

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (733.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232588 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 10:49:28.205569 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:50:43.086358 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:52:06.151508 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:54:28.205975 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:55:43.084617 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232588 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m11.338767044s)

                                                
                                                
-- stdout --
	* [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
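
For reference, the --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision flag exercised by this test maps a component.key=value pair onto the kubeadm config that minikube renders at /var/tmp/minikube/kubeadm.yaml. The intended effect is roughly the apiserver fragment below; this is a sketch in kubeadm's v1beta4 schema (which carries extra args as name/value pairs), and the file minikube actually generates for v1.35.0-rc.1 may differ:

	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  extraArgs:
	    - name: enable-admission-plugins
	      value: NamespaceAutoProvision
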
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000331472s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout and stderr identical to the "initialization failed, will try again" attempt above ...]
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
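The failure above names two concrete remediations. As a rough sketch only (the profile name and `--extra-config` flag are taken verbatim from this log; the YAML placement of the kubelet option is inferred, not confirmed against this run):

    # Suggestion printed by minikube, applied to this profile:
    minikube start -p functional-232588 --extra-config=kubelet.cgroup-driver=systemd

    # Cgroup v1 opt-out named in the SystemVerification warning, assumed to go in
    # the KubeletConfiguration that kubeadm writes to /var/lib/kubelet/config.yaml:
    failCgroupV1: false

Per the warning text, the second path also requires explicitly skipping the cgroups v1 validation; neither remediation was exercised in this run.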
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-232588 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m11.340125288s for "functional-232588" cluster.
I1217 10:58:28.106127 2924574 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
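Individual fields can be read back from this JSON with a Go template instead of scanning the full dump; this sketch reuses the `(index (index .NetworkSettings.Ports ...) 0).HostPort` pattern that appears later in this log for 22/tcp, here applied to the apiserver port:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-232588
    # per the inspect output above, prints: 35736  (127.0.0.1:35736 -> 8441/tcp)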
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (312.75431ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-626013 image ls --format yaml --alsologtostderr                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh     │ functional-626013 ssh pgrep buildkitd                                                                                                                 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ image   │ functional-626013 image ls --format json --alsologtostderr                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls --format table --alsologtostderr                                                                                           │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr                                                │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls                                                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ delete  │ -p functional-626013                                                                                                                                  │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ start   │ -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ start   │ -p functional-232588 --alsologtostderr -v=8                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:39 UTC │                     │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:latest                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add minikube-local-cache-test:functional-232588                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache delete minikube-local-cache-test:functional-232588                                                                            │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl images                                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	│ cache   │ functional-232588 cache reload                                                                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ kubectl │ functional-232588 kubectl -- --context functional-232588 get pods                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	│ start   │ -p functional-232588 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:46:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:46:16.812860 2974151 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:46:16.812963 2974151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:46:16.813007 2974151 out.go:374] Setting ErrFile to fd 2...
	I1217 10:46:16.813012 2974151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:46:16.813266 2974151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:46:16.813634 2974151 out.go:368] Setting JSON to false
	I1217 10:46:16.814461 2974151 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":59327,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:46:16.814519 2974151 start.go:143] virtualization:  
	I1217 10:46:16.818066 2974151 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:46:16.822068 2974151 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:46:16.822151 2974151 notify.go:221] Checking for updates...
	I1217 10:46:16.828253 2974151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:46:16.831316 2974151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:46:16.834373 2974151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:46:16.837375 2974151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:46:16.840310 2974151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:46:16.843753 2974151 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:46:16.843853 2974151 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:46:16.873076 2974151 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:46:16.873190 2974151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:46:16.938275 2974151 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 10:46:16.928760564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:46:16.938365 2974151 docker.go:319] overlay module found
	I1217 10:46:16.941603 2974151 out.go:179] * Using the docker driver based on existing profile
	I1217 10:46:16.944540 2974151 start.go:309] selected driver: docker
	I1217 10:46:16.944578 2974151 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:16.944677 2974151 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:46:16.944788 2974151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:46:17.021027 2974151 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 10:46:17.010774366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:46:17.021436 2974151 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 10:46:17.021458 2974151 cni.go:84] Creating CNI manager for ""
	I1217 10:46:17.021510 2974151 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:46:17.021561 2974151 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:17.024793 2974151 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:46:17.027565 2974151 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:46:17.030993 2974151 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:46:17.033790 2974151 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:46:17.033824 2974151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:46:17.033833 2974151 cache.go:65] Caching tarball of preloaded images
	I1217 10:46:17.033918 2974151 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:46:17.033926 2974151 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 10:46:17.034031 2974151 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:46:17.034251 2974151 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:46:17.058099 2974151 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:46:17.058112 2974151 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 10:46:17.058125 2974151 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:46:17.058155 2974151 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:46:17.058219 2974151 start.go:364] duration metric: took 48.59µs to acquireMachinesLock for "functional-232588"
	I1217 10:46:17.058239 2974151 start.go:96] Skipping create...Using existing machine configuration
	I1217 10:46:17.058243 2974151 fix.go:54] fixHost starting: 
	I1217 10:46:17.058504 2974151 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:46:17.079212 2974151 fix.go:112] recreateIfNeeded on functional-232588: state=Running err=<nil>
	W1217 10:46:17.079241 2974151 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 10:46:17.082582 2974151 out.go:252] * Updating the running docker "functional-232588" container ...
	I1217 10:46:17.082612 2974151 machine.go:94] provisionDockerMachine start ...
	I1217 10:46:17.082696 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.100077 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.100208 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.100214 2974151 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:46:17.228063 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:46:17.228077 2974151 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:46:17.228138 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.245852 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.245963 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.245971 2974151 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:46:17.390208 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:46:17.390287 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.409213 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.409321 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.409335 2974151 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:46:17.545048 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 10:46:17.545065 2974151 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:46:17.545093 2974151 ubuntu.go:190] setting up certificates
	I1217 10:46:17.545101 2974151 provision.go:84] configureAuth start
	I1217 10:46:17.545170 2974151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:46:17.563036 2974151 provision.go:143] copyHostCerts
	I1217 10:46:17.563100 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:46:17.563107 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:46:17.563182 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:46:17.563277 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:46:17.563281 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:46:17.563306 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:46:17.563356 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:46:17.563359 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:46:17.563381 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:46:17.563426 2974151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
	I1217 10:46:17.716164 2974151 provision.go:177] copyRemoteCerts
	I1217 10:46:17.716219 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:46:17.716261 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.737388 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:17.836120 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:46:17.853626 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:46:17.870501 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 10:46:17.888326 2974151 provision.go:87] duration metric: took 343.201911ms to configureAuth
	I1217 10:46:17.888344 2974151 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:46:17.888621 2974151 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:46:17.888627 2974151 machine.go:97] duration metric: took 806.010876ms to provisionDockerMachine
	I1217 10:46:17.888635 2974151 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:46:17.888646 2974151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:46:17.888710 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:46:17.888750 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.905996 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.000491 2974151 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:46:18.012109 2974151 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:46:18.012146 2974151 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:46:18.012158 2974151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:46:18.012224 2974151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:46:18.012302 2974151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:46:18.012378 2974151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:46:18.012531 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:46:18.021349 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:46:18.041286 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:46:18.060319 2974151 start.go:296] duration metric: took 171.669118ms for postStartSetup
	I1217 10:46:18.060436 2974151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:46:18.060478 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.080470 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.173527 2974151 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:46:18.178353 2974151 fix.go:56] duration metric: took 1.120102504s for fixHost
	I1217 10:46:18.178370 2974151 start.go:83] releasing machines lock for "functional-232588", held for 1.120143316s
	I1217 10:46:18.178439 2974151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:46:18.195096 2974151 ssh_runner.go:195] Run: cat /version.json
	I1217 10:46:18.195136 2974151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:46:18.195139 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.195194 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.218089 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.224561 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.312237 2974151 ssh_runner.go:195] Run: systemctl --version
	I1217 10:46:18.401982 2974151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 10:46:18.406442 2974151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:46:18.406503 2974151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:46:18.414452 2974151 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 10:46:18.414475 2974151 start.go:496] detecting cgroup driver to use...
	I1217 10:46:18.414504 2974151 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 10:46:18.414555 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:46:18.437080 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:46:18.453263 2974151 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:46:18.453314 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:46:18.469891 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:46:18.484540 2974151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:46:18.608866 2974151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:46:18.727258 2974151 docker.go:234] disabling docker service ...
	I1217 10:46:18.727333 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:46:18.742532 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:46:18.755933 2974151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:46:18.876736 2974151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:46:18.997189 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 10:46:19.012062 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:46:19.033558 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:46:19.046193 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:46:19.056269 2974151 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:46:19.056333 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:46:19.066650 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:46:19.076242 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:46:19.086026 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:46:19.095009 2974151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:46:19.103467 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:46:19.112970 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:46:19.121805 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 10:46:19.131086 2974151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:46:19.139081 2974151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:46:19.146487 2974151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:46:19.293215 2974151 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 10:46:19.434655 2974151 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:46:19.434715 2974151 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:46:19.439246 2974151 start.go:564] Will wait 60s for crictl version
	I1217 10:46:19.439314 2974151 ssh_runner.go:195] Run: which crictl
	I1217 10:46:19.442915 2974151 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:46:19.467445 2974151 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:46:19.467506 2974151 ssh_runner.go:195] Run: containerd --version
	I1217 10:46:19.489544 2974151 ssh_runner.go:195] Run: containerd --version
	I1217 10:46:19.516185 2974151 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
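The sequence from 10:46:19.033 to 10:46:19.293 above is the containerd reconfiguration step: in-place rewrites of /etc/containerd/config.toml (pause image, cgroupfs via SystemdCgroup = false), a systemctl restart, then a bounded wait for the CRI socket. A minimal Go sketch of that rewrite/restart/wait pattern, assuming root and the socket path from the log (an illustration, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
	"time"
)

func main() {
	const cfg = "/etc/containerd/config.toml"
	data, err := os.ReadFile(cfg)
	if err != nil {
		panic(err)
	}
	// Force the cgroupfs driver, mirroring the sed rewrite in the log above.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(cfg, out, 0o644); err != nil {
		panic(err)
	}
	if err := exec.Command("systemctl", "restart", "containerd").Run(); err != nil {
		panic(err)
	}
	// Bounded wait for the CRI socket, like the 60s wait at start.go:543.
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat("/run/containerd/containerd.sock"); err == nil {
			fmt.Println("containerd socket is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for /run/containerd/containerd.sock")
}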
	I1217 10:46:19.519114 2974151 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:46:19.535732 2974151 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:46:19.542843 2974151 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 10:46:19.545647 2974151 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:46:19.545821 2974151 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:46:19.545902 2974151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:46:19.570156 2974151 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:46:19.570167 2974151 containerd.go:534] Images already preloaded, skipping extraction
	I1217 10:46:19.570223 2974151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:46:19.598013 2974151 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:46:19.598025 2974151 cache_images.go:86] Images are preloaded, skipping loading
	I1217 10:46:19.598031 2974151 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:46:19.598133 2974151 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
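The doubled ExecStart in the generated unit above is the usual systemd drop-in idiom: the empty ExecStart= clears the command inherited from the base kubelet.service before the second line installs the full invocation. A sketch of writing such a drop-in in Go, with the flag list abbreviated from the log (illustrative only):

package main

import "os"

// dropIn mirrors the [Service] section logged above: the empty
// ExecStart= resets the base unit before the override takes effect.
const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
`

func main() {
	err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644)
	if err != nil {
		panic(err)
	}
}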
	I1217 10:46:19.598195 2974151 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:46:19.628150 2974151 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 10:46:19.628169 2974151 cni.go:84] Creating CNI manager for ""
	I1217 10:46:19.628176 2974151 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:46:19.628184 2974151 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:46:19.628205 2974151 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:46:19.628313 2974151 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 10:46:19.628380 2974151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:46:19.636242 2974151 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 10:46:19.636301 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:46:19.643919 2974151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:46:19.658022 2974151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:46:19.670961 2974151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
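The kubeadm.yaml staged above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A short Go sketch that walks such a stream and prints each document's kind; it assumes the gopkg.in/yaml.v3 module and the path from the log:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break // end of the multi-document stream
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc["kind"], doc["apiVersion"])
	}
}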
	I1217 10:46:19.684065 2974151 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:46:19.687947 2974151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:46:19.796384 2974151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:46:20.002745 2974151 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:46:20.002759 2974151 certs.go:195] generating shared ca certs ...
	I1217 10:46:20.002799 2974151 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:46:20.002998 2974151 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:46:20.003055 2974151 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:46:20.003062 2974151 certs.go:257] generating profile certs ...
	I1217 10:46:20.003183 2974151 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:46:20.003236 2974151 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:46:20.003288 2974151 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:46:20.003444 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:46:20.003480 2974151 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:46:20.003508 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:46:20.003545 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:46:20.003577 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:46:20.003610 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:46:20.003665 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:46:20.004449 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:46:20.040127 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:46:20.065442 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:46:20.086611 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:46:20.107054 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:46:20.126007 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:46:20.144078 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:46:20.162802 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:46:20.181368 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:46:20.200073 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:46:20.217945 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:46:20.235640 2974151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 10:46:20.248545 2974151 ssh_runner.go:195] Run: openssl version
	I1217 10:46:20.256076 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.263759 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:46:20.271126 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.274974 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.275038 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.316429 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:46:20.323945 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.331201 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:46:20.339536 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.343551 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.343606 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.384485 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 10:46:20.391694 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.399044 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:46:20.406332 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.410078 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.410134 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.451203 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 10:46:20.458641 2974151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:46:20.462247 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 10:46:20.503114 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 10:46:20.544335 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 10:46:20.590045 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 10:46:20.630985 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 10:46:20.672580 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
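Each openssl x509 ... -checkend 86400 invocation above asks whether a certificate expires within the next 24 hours (non-zero exit if it does). The same check can be done with Go's standard library alone; a sketch, using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the cert expires within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}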
	I1217 10:46:20.713547 2974151 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:20.713638 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:46:20.713707 2974151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:46:20.740007 2974151 cri.go:89] found id: ""
	I1217 10:46:20.740065 2974151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:46:20.747914 2974151 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 10:46:20.747924 2974151 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 10:46:20.747974 2974151 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 10:46:20.757908 2974151 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.758430 2974151 kubeconfig.go:125] found "functional-232588" server: "https://192.168.49.2:8441"
	I1217 10:46:20.761036 2974151 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 10:46:20.769414 2974151 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 10:31:46.081162571 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 10:46:19.676908670 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
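The drift check above relies on diff's exit status: 0 for identical files, 1 for differing files, 2 for an error in diff itself. A Go sketch of the same decision (paths taken from the log; illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err == nil {
		fmt.Println("no drift")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		// diff exits 1 when the files differ: treat as config drift.
		fmt.Printf("config drift detected:\n%s", out)
		return
	}
	panic(err) // exit code 2: diff itself failed
}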
	I1217 10:46:20.769441 2974151 kubeadm.go:1161] stopping kube-system containers ...
	I1217 10:46:20.769455 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 10:46:20.769528 2974151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:46:20.801226 2974151 cri.go:89] found id: ""
	I1217 10:46:20.801308 2974151 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 10:46:20.820664 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:46:20.829373 2974151 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 10:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 10:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 17 10:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 10:35 /etc/kubernetes/scheduler.conf
	
	I1217 10:46:20.829433 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:46:20.837325 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:46:20.845308 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.845363 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:46:20.853199 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:46:20.860841 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.860897 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:46:20.868346 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:46:20.876151 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.876211 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 10:46:20.883945 2974151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 10:46:20.892018 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:20.938748 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.162130 2974151 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.22335875s)
	I1217 10:46:22.162221 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.359829 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.415930 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.468185 2974151 api_server.go:52] waiting for apiserver process to appear ...
	I1217 10:46:22.468265 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:22.969146 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* (this identical probe repeated 119 times at ~500ms intervals, the last at I1217 10:47:21.969358; no kube-apiserver process was found in that window)
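The run condensed above is minikube's wait-for-apiserver loop: the identical pgrep probe every ~500ms until a matching process appears or the budget runs out. A minimal Go rendering of that loop; the two-minute deadline below is an assumed placeholder, not the timeout from api_server.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// pgrep exits 0 only when a process matches the full-command pattern.
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}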
	I1217 10:47:22.469259 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:22.469337 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:22.493873 2974151 cri.go:89] found id: ""
	I1217 10:47:22.493887 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.493894 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:22.493901 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:22.493960 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:22.522462 2974151 cri.go:89] found id: ""
	I1217 10:47:22.522476 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.522483 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:22.522488 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:22.522547 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:22.550878 2974151 cri.go:89] found id: ""
	I1217 10:47:22.550892 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.550899 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:22.550904 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:22.550964 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:22.576167 2974151 cri.go:89] found id: ""
	I1217 10:47:22.576181 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.576188 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:22.576193 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:22.576253 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:22.600591 2974151 cri.go:89] found id: ""
	I1217 10:47:22.600605 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.600612 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:22.600617 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:22.600673 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:22.624978 2974151 cri.go:89] found id: ""
	I1217 10:47:22.624992 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.624999 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:22.625005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:22.625062 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:22.649387 2974151 cri.go:89] found id: ""
	I1217 10:47:22.649401 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.649408 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:22.649415 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:22.649427 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:22.666544 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:22.666563 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:22.733635 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:22.724930   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.725595   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727257   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727857   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.729508   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:22.733647 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:22.733658 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:22.802118 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:22.802139 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:22.842645 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:22.842661 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:25.403296 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:25.413370 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:25.413431 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:25.437778 2974151 cri.go:89] found id: ""
	I1217 10:47:25.437792 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.437799 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:25.437804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:25.437864 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:25.466932 2974151 cri.go:89] found id: ""
	I1217 10:47:25.466946 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.466953 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:25.466959 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:25.467017 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:25.495887 2974151 cri.go:89] found id: ""
	I1217 10:47:25.495901 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.495907 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:25.495912 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:25.495971 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:25.521061 2974151 cri.go:89] found id: ""
	I1217 10:47:25.521075 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.521082 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:25.521087 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:25.521146 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:25.550884 2974151 cri.go:89] found id: ""
	I1217 10:47:25.550898 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.550905 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:25.550910 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:25.550967 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:25.576130 2974151 cri.go:89] found id: ""
	I1217 10:47:25.576145 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.576151 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:25.576156 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:25.576224 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:25.600903 2974151 cri.go:89] found id: ""
	I1217 10:47:25.600916 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.600923 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:25.600931 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:25.600941 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:25.633359 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:25.633375 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:25.689492 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:25.689512 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:25.706643 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:25.706661 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:25.788195 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:25.780730   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.781147   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782587   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782886   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.784365   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:25.788207 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:25.788218 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:28.357987 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:28.368310 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:28.368371 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:28.393766 2974151 cri.go:89] found id: ""
	I1217 10:47:28.393789 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.393797 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:28.393803 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:28.393876 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:28.418225 2974151 cri.go:89] found id: ""
	I1217 10:47:28.418240 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.418247 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:28.418253 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:28.418312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:28.444064 2974151 cri.go:89] found id: ""
	I1217 10:47:28.444083 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.444091 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:28.444096 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:28.444157 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:28.469125 2974151 cri.go:89] found id: ""
	I1217 10:47:28.469139 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.469146 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:28.469152 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:28.469210 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:28.494598 2974151 cri.go:89] found id: ""
	I1217 10:47:28.494614 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.494621 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:28.494627 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:28.494689 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:28.529767 2974151 cri.go:89] found id: ""
	I1217 10:47:28.529781 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.529788 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:28.529793 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:28.529851 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:28.554626 2974151 cri.go:89] found id: ""
	I1217 10:47:28.554640 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.554653 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:28.554661 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:28.554671 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:28.610665 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:28.610693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:28.627829 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:28.627846 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:28.694227 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:28.685909   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.686688   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688310   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688904   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.690427   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:28.694247 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:28.694257 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:28.761980 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:28.761999 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
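The scan above is the same seven-way check repeated every iteration: one crictl query per control-plane component, with an empty ID list producing the "No container was found matching" warning. A minimal Go sketch of that check, assuming crictl is on PATH with sufficient privileges (the component list and messages are illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// crictl exits 0 with empty output when nothing matches,
		// which is why the log shows `found id: ""` before the warning.
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) > 0 {
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		} else {
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}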
	I1217 10:47:31.299127 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:31.309358 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:31.309418 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:31.334436 2974151 cri.go:89] found id: ""
	I1217 10:47:31.334450 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.334458 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:31.334463 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:31.334530 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:31.359180 2974151 cri.go:89] found id: ""
	I1217 10:47:31.359195 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.359202 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:31.359207 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:31.359264 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:31.386298 2974151 cri.go:89] found id: ""
	I1217 10:47:31.386312 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.386319 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:31.386324 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:31.386385 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:31.414747 2974151 cri.go:89] found id: ""
	I1217 10:47:31.414762 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.414769 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:31.414774 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:31.414835 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:31.439979 2974151 cri.go:89] found id: ""
	I1217 10:47:31.439993 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.439999 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:31.440005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:31.440061 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:31.465613 2974151 cri.go:89] found id: ""
	I1217 10:47:31.465628 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.465635 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:31.465641 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:31.465698 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:31.495303 2974151 cri.go:89] found id: ""
	I1217 10:47:31.495317 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.495324 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:31.495332 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:31.495347 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:31.551359 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:31.551380 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:31.568339 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:31.568356 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:31.631156 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:31.622217   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.623260   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.624240   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.625368   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.626068   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:31.631168 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:31.631179 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:31.694344 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:31.694364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
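Every describe-nodes attempt fails identically: kubectl cannot complete a TCP connection to localhost:8441, so no process is listening on the profile's apiserver port, consistent with the empty kube-apiserver container scans above. A self-contained probe that makes the same determination directly (a hypothetical standalone check, not minikube code; the two-second timeout is arbitrary):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same condition kubectl reports as "connection refused":
	// nothing accepting connections on the apiserver port.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on localhost:8441")
}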
	I1217 10:47:34.224306 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:34.234549 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:34.234609 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:34.262893 2974151 cri.go:89] found id: ""
	I1217 10:47:34.262907 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.262913 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:34.262919 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:34.262974 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:34.287865 2974151 cri.go:89] found id: ""
	I1217 10:47:34.287880 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.287887 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:34.287892 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:34.287971 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:34.314130 2974151 cri.go:89] found id: ""
	I1217 10:47:34.314144 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.314151 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:34.314157 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:34.314213 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:34.338080 2974151 cri.go:89] found id: ""
	I1217 10:47:34.338094 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.338101 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:34.338106 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:34.338167 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:34.366907 2974151 cri.go:89] found id: ""
	I1217 10:47:34.366922 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.366929 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:34.366934 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:34.367005 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:34.394628 2974151 cri.go:89] found id: ""
	I1217 10:47:34.394642 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.394650 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:34.394655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:34.394718 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:34.422575 2974151 cri.go:89] found id: ""
	I1217 10:47:34.422590 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.422597 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:34.422605 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:34.422615 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:34.478427 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:34.478445 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:34.495399 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:34.495416 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:34.567591 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:34.559443   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.560218   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.561959   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.562370   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.563927   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:34.567600 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:34.567611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:34.629987 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:34.630008 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:37.172568 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:37.185167 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:37.185227 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:37.209648 2974151 cri.go:89] found id: ""
	I1217 10:47:37.209662 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.209669 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:37.209674 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:37.209734 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:37.239202 2974151 cri.go:89] found id: ""
	I1217 10:47:37.239216 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.239223 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:37.239229 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:37.239287 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:37.264777 2974151 cri.go:89] found id: ""
	I1217 10:47:37.264791 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.264798 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:37.264803 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:37.264870 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:37.290195 2974151 cri.go:89] found id: ""
	I1217 10:47:37.290209 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.290216 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:37.290221 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:37.290277 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:37.315019 2974151 cri.go:89] found id: ""
	I1217 10:47:37.315033 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.315040 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:37.315046 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:37.315116 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:37.339319 2974151 cri.go:89] found id: ""
	I1217 10:47:37.339333 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.339340 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:37.339345 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:37.339407 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:37.365996 2974151 cri.go:89] found id: ""
	I1217 10:47:37.366010 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.366017 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:37.366024 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:37.366034 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:37.382805 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:37.382824 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:37.447944 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:37.439827   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.440553   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442220   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442682   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.444195   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:37.447955 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:37.447966 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:37.510276 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:37.510298 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:37.540200 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:37.540215 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
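Each gathering pass captures the last 400 journal lines per systemd unit (kubelet, containerd) plus a filtered dmesg. A short sketch of the journal capture under the same parameters, assuming a systemd host with journalctl on PATH; --no-pager is added here for non-interactive use, and the unit list mirrors the log:

package main

import (
	"fmt"
	"os/exec"
)

// gather returns the last 400 journal lines for a systemd unit,
// mirroring `journalctl -u <unit> -n 400` from the log above.
func gather(unit string) (string, error) {
	out, err := exec.Command("journalctl", "-u", unit, "-n", "400", "--no-pager").CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := gather(unit)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("=== %s (%d bytes) ===\n%s", unit, len(logs), logs)
	}
}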
	I1217 10:47:40.105556 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:40.119775 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:40.119860 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:40.144817 2974151 cri.go:89] found id: ""
	I1217 10:47:40.144832 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.144839 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:40.144844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:40.144908 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:40.169663 2974151 cri.go:89] found id: ""
	I1217 10:47:40.169676 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.169683 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:40.169688 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:40.169745 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:40.194821 2974151 cri.go:89] found id: ""
	I1217 10:47:40.194835 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.194842 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:40.194847 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:40.194909 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:40.222839 2974151 cri.go:89] found id: ""
	I1217 10:47:40.222853 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.222860 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:40.222866 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:40.222940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:40.247991 2974151 cri.go:89] found id: ""
	I1217 10:47:40.248005 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.248012 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:40.248017 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:40.248075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:40.272758 2974151 cri.go:89] found id: ""
	I1217 10:47:40.272772 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.272778 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:40.272783 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:40.272844 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:40.298276 2974151 cri.go:89] found id: ""
	I1217 10:47:40.298290 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.298297 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:40.298305 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:40.298316 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:40.314934 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:40.314950 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:40.379519 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:40.371790   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.372215   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.373688   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.374125   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.375622   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:40.379532 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:40.379544 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:40.442308 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:40.442328 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:40.471269 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:40.471287 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:43.030145 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:43.043645 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:43.043715 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:43.081236 2974151 cri.go:89] found id: ""
	I1217 10:47:43.081250 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.081257 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:43.081262 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:43.081326 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:43.115370 2974151 cri.go:89] found id: ""
	I1217 10:47:43.115384 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.115390 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:43.115399 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:43.115462 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:43.140373 2974151 cri.go:89] found id: ""
	I1217 10:47:43.140387 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.140395 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:43.140400 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:43.140480 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:43.166855 2974151 cri.go:89] found id: ""
	I1217 10:47:43.166870 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.166877 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:43.166883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:43.166941 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:43.191839 2974151 cri.go:89] found id: ""
	I1217 10:47:43.191854 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.191861 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:43.191866 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:43.191927 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:43.217632 2974151 cri.go:89] found id: ""
	I1217 10:47:43.217652 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.217659 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:43.217664 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:43.217725 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:43.242042 2974151 cri.go:89] found id: ""
	I1217 10:47:43.242056 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.242064 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:43.242071 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:43.242081 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:43.299602 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:43.299621 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:43.316995 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:43.317012 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:43.381195 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:43.373241   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.374026   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375639   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375964   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.377408   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:43.381206 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:43.381217 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:43.443981 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:43.444003 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:45.975295 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:45.985580 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:45.985639 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:46.020412 2974151 cri.go:89] found id: ""
	I1217 10:47:46.020446 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.020454 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:46.020460 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:46.020529 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:46.056724 2974151 cri.go:89] found id: ""
	I1217 10:47:46.056739 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.056755 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:46.056762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:46.056823 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:46.087796 2974151 cri.go:89] found id: ""
	I1217 10:47:46.087811 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.087818 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:46.087844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:46.087924 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:46.112453 2974151 cri.go:89] found id: ""
	I1217 10:47:46.112467 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.112475 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:46.112480 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:46.112539 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:46.141019 2974151 cri.go:89] found id: ""
	I1217 10:47:46.141034 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.141041 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:46.141047 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:46.141103 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:46.165608 2974151 cri.go:89] found id: ""
	I1217 10:47:46.165621 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.165628 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:46.165634 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:46.165691 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:46.192283 2974151 cri.go:89] found id: ""
	I1217 10:47:46.192307 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.192315 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:46.192323 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:46.192335 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:46.255412 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:46.255435 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:46.287390 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:46.287406 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:46.344424 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:46.344442 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:46.361344 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:46.361361 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:46.424398 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:46.416182   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.416923   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418495   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418798   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.420304   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:48.924647 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:48.934813 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:48.934877 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:48.959135 2974151 cri.go:89] found id: ""
	I1217 10:47:48.959159 2974151 logs.go:282] 0 containers: []
	W1217 10:47:48.959166 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:48.959172 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:48.959241 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:48.983610 2974151 cri.go:89] found id: ""
	I1217 10:47:48.983632 2974151 logs.go:282] 0 containers: []
	W1217 10:47:48.983640 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:48.983645 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:48.983714 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:49.026685 2974151 cri.go:89] found id: ""
	I1217 10:47:49.026700 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.026707 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:49.026713 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:49.026773 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:49.060861 2974151 cri.go:89] found id: ""
	I1217 10:47:49.060876 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.060883 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:49.060890 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:49.060950 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:49.090198 2974151 cri.go:89] found id: ""
	I1217 10:47:49.090213 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.090221 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:49.090226 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:49.090288 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:49.119661 2974151 cri.go:89] found id: ""
	I1217 10:47:49.119676 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.119683 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:49.119689 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:49.119812 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:49.148486 2974151 cri.go:89] found id: ""
	I1217 10:47:49.148500 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.148507 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:49.148515 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:49.148525 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:49.212250 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:49.212271 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:49.240975 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:49.240993 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:49.299733 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:49.299756 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:49.316863 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:49.316882 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:49.387132 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:49.378625   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.379410   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381103   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381692   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.383302   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:51.888132 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:51.898751 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:51.898816 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:51.932795 2974151 cri.go:89] found id: ""
	I1217 10:47:51.932815 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.932827 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:51.932833 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:51.932896 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:51.963357 2974151 cri.go:89] found id: ""
	I1217 10:47:51.963371 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.963378 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:51.963384 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:51.963448 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:51.988757 2974151 cri.go:89] found id: ""
	I1217 10:47:51.988778 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.988785 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:51.988790 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:51.988850 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:52.028153 2974151 cri.go:89] found id: ""
	I1217 10:47:52.028167 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.028174 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:52.028180 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:52.028244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:52.063954 2974151 cri.go:89] found id: ""
	I1217 10:47:52.063968 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.063975 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:52.063980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:52.064038 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:52.098500 2974151 cri.go:89] found id: ""
	I1217 10:47:52.098514 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.098521 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:52.098527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:52.098587 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:52.130345 2974151 cri.go:89] found id: ""
	I1217 10:47:52.130359 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.130366 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:52.130374 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:52.130384 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:52.189106 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:52.189126 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:52.207475 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:52.207493 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:52.271884 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:52.263636   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.264410   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.265990   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.266498   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.267978   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:52.263636   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.264410   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.265990   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.266498   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.267978   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
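The repeated "connection refused" on [::1]:8441 confirms that nothing is listening on the apiserver port this test configured (--apiserver-port=8441), so every kubectl probe fails before authentication is even attempted. A quick hedged check from inside the node, assuming ss and curl are available there:

	# Is anything bound to 8441? (empty output = nothing listening)
	sudo ss -ltn 'sport = :8441'
	# Direct health probe; expect "connection refused" while the apiserver is down
	curl -ks https://localhost:8441/healthz || true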
	I1217 10:47:52.271903 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:52.271914 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:52.334484 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:52.334504 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:54.867624 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:54.877729 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:54.877789 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:54.902223 2974151 cri.go:89] found id: ""
	I1217 10:47:54.902237 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.902244 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:54.902250 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:54.902312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:54.927795 2974151 cri.go:89] found id: ""
	I1217 10:47:54.927810 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.927817 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:54.927823 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:54.927888 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:54.954800 2974151 cri.go:89] found id: ""
	I1217 10:47:54.954816 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.954823 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:54.954829 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:54.954888 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:54.980005 2974151 cri.go:89] found id: ""
	I1217 10:47:54.980018 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.980025 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:54.980030 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:54.980093 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:55.013092 2974151 cri.go:89] found id: ""
	I1217 10:47:55.013107 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.013115 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:55.013121 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:55.013191 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:55.050531 2974151 cri.go:89] found id: ""
	I1217 10:47:55.050545 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.050552 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:55.050557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:55.050619 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:55.090230 2974151 cri.go:89] found id: ""
	I1217 10:47:55.090245 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.090252 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:55.090260 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:55.090270 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:55.153444 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:55.153464 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:55.185504 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:55.185520 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:55.242466 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:55.242485 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:55.260631 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:55.260648 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:55.331030 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:55.322930   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.323475   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.324828   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.325446   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.327095   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:55.322930   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.323475   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.324828   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.325446   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.327095   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
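The pgrep probe that opens each cycle (visible again on the next line) runs on a roughly three-second cadence: -f matches against the full command line, -x requires the whole line to match the pattern, and -n picks the newest match. A hedged sketch of the same wait loop:

	# Poll until a kube-apiserver process whose command line mentions minikube appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done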
	I1217 10:47:57.831262 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:57.841170 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:57.841234 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:57.869513 2974151 cri.go:89] found id: ""
	I1217 10:47:57.869529 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.869536 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:57.869542 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:57.869602 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:57.898410 2974151 cri.go:89] found id: ""
	I1217 10:47:57.898424 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.898431 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:57.898437 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:57.898497 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:57.926916 2974151 cri.go:89] found id: ""
	I1217 10:47:57.926931 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.926938 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:57.926944 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:57.927008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:57.956754 2974151 cri.go:89] found id: ""
	I1217 10:47:57.956768 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.956775 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:57.956780 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:57.956840 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:57.981614 2974151 cri.go:89] found id: ""
	I1217 10:47:57.981629 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.981636 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:57.981642 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:57.981701 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:58.021825 2974151 cri.go:89] found id: ""
	I1217 10:47:58.021839 2974151 logs.go:282] 0 containers: []
	W1217 10:47:58.021846 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:58.021852 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:58.021924 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:58.055082 2974151 cri.go:89] found id: ""
	I1217 10:47:58.055097 2974151 logs.go:282] 0 containers: []
	W1217 10:47:58.055104 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:58.055111 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:58.055120 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:58.117865 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:58.117887 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:58.136280 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:58.136297 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:58.204520 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:58.195962   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.196685   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.198489   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.199076   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.200656   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:58.195962   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.196685   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.198489   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.199076   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.200656   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:58.204540 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:58.204551 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:58.267689 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:58.267713 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
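The container-status command above uses a small fallback chain: it resolves crictl via which, keeps the bare name if which finds nothing (so the eventual error message stays readable), and falls back to docker ps if the crictl listing fails outright. Expanded for readability, under the same assumptions:

	CRICTL="$(which crictl || echo crictl)"    # bare name if which finds nothing
	sudo "$CRICTL" ps -a || sudo docker ps -a  # docker fallback for non-CRI setups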
	I1217 10:48:00.795803 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:00.807186 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:00.807252 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:00.833048 2974151 cri.go:89] found id: ""
	I1217 10:48:00.833062 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.833069 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:00.833074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:00.833136 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:00.863311 2974151 cri.go:89] found id: ""
	I1217 10:48:00.863325 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.863332 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:00.863338 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:00.863398 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:00.887857 2974151 cri.go:89] found id: ""
	I1217 10:48:00.887871 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.887877 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:00.887883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:00.887940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:00.913735 2974151 cri.go:89] found id: ""
	I1217 10:48:00.913749 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.913756 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:00.913762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:00.913824 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:00.938305 2974151 cri.go:89] found id: ""
	I1217 10:48:00.938319 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.938327 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:00.938333 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:00.938390 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:00.963900 2974151 cri.go:89] found id: ""
	I1217 10:48:00.963914 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.963920 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:00.963925 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:00.963985 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:00.990708 2974151 cri.go:89] found id: ""
	I1217 10:48:00.990722 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.990729 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:00.990737 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:00.990747 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:01.012006 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:01.012023 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:01.099675 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:01.089770   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.090990   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.092688   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.093302   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.095197   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:01.089770   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.090990   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.092688   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.093302   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.095197   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:01.099686 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:01.099702 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:01.164360 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:01.164381 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:01.194518 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:01.194535 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:03.752593 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:03.763233 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:03.763297 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:03.788873 2974151 cri.go:89] found id: ""
	I1217 10:48:03.788893 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.788901 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:03.788907 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:03.788968 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:03.818571 2974151 cri.go:89] found id: ""
	I1217 10:48:03.818586 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.818593 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:03.818598 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:03.818657 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:03.844383 2974151 cri.go:89] found id: ""
	I1217 10:48:03.844397 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.844405 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:03.844410 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:03.844496 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:03.869318 2974151 cri.go:89] found id: ""
	I1217 10:48:03.869333 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.869339 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:03.869345 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:03.869404 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:03.895029 2974151 cri.go:89] found id: ""
	I1217 10:48:03.895043 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.895050 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:03.895055 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:03.895113 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:03.920493 2974151 cri.go:89] found id: ""
	I1217 10:48:03.920509 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.920516 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:03.920522 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:03.920592 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:03.945885 2974151 cri.go:89] found id: ""
	I1217 10:48:03.945898 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.945905 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:03.945912 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:03.945922 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:04.003008 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:04.003033 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:04.026399 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:04.026416 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:04.107334 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:04.098549   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.099321   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101190   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101779   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.103317   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:04.098549   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.099321   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101190   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101779   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.103317   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:04.107349 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:04.107360 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:04.174915 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:04.174940 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:06.707611 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:06.718250 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:06.718313 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:06.743084 2974151 cri.go:89] found id: ""
	I1217 10:48:06.743098 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.743105 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:06.743110 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:06.743169 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:06.769923 2974151 cri.go:89] found id: ""
	I1217 10:48:06.769937 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.769945 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:06.769950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:06.770016 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:06.798634 2974151 cri.go:89] found id: ""
	I1217 10:48:06.798648 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.798655 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:06.798660 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:06.798719 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:06.823901 2974151 cri.go:89] found id: ""
	I1217 10:48:06.823915 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.823923 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:06.823928 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:06.823990 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:06.849872 2974151 cri.go:89] found id: ""
	I1217 10:48:06.849885 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.849892 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:06.849898 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:06.849957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:06.875558 2974151 cri.go:89] found id: ""
	I1217 10:48:06.875572 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.875580 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:06.875585 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:06.875642 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:06.901051 2974151 cri.go:89] found id: ""
	I1217 10:48:06.901065 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.901071 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:06.901079 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:06.901088 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:06.964468 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:06.964488 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:06.993527 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:06.993542 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:07.062199 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:07.062218 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:07.082316 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:07.082334 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:07.157387 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:07.148299   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.149171   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.150892   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.151645   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.153293   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:07.148299   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.149171   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.150892   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.151645   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.153293   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
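For manual triage, the same describe-nodes probe can be replayed from the host; a sketch assuming the profile name and kubectl path from this run, and that minikube ssh accepts a trailing command:

	out/minikube-linux-arm64 ssh -p functional-232588 -- \
	  sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig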
	I1217 10:48:09.657640 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:09.667724 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:09.667783 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:09.693919 2974151 cri.go:89] found id: ""
	I1217 10:48:09.693935 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.693941 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:09.693948 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:09.694008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:09.722743 2974151 cri.go:89] found id: ""
	I1217 10:48:09.722758 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.722765 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:09.722770 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:09.722828 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:09.756610 2974151 cri.go:89] found id: ""
	I1217 10:48:09.756624 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.756632 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:09.756637 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:09.756693 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:09.786006 2974151 cri.go:89] found id: ""
	I1217 10:48:09.786021 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.786028 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:09.786033 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:09.786097 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:09.810865 2974151 cri.go:89] found id: ""
	I1217 10:48:09.810878 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.810885 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:09.810890 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:09.810947 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:09.838221 2974151 cri.go:89] found id: ""
	I1217 10:48:09.838235 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.838242 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:09.838247 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:09.838307 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:09.866748 2974151 cri.go:89] found id: ""
	I1217 10:48:09.866762 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.866769 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:09.866776 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:09.866786 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:09.929554 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:09.929576 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:09.959017 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:09.959032 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:10.017246 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:10.017265 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:10.036170 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:10.036188 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:10.112138 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:10.102458   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.103256   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105102   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105527   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.107946   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:10.102458   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.103256   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105102   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105527   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.107946   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:12.612434 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:12.622568 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:12.622628 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:12.650041 2974151 cri.go:89] found id: ""
	I1217 10:48:12.650061 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.650069 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:12.650074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:12.650134 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:12.674422 2974151 cri.go:89] found id: ""
	I1217 10:48:12.674437 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.674444 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:12.674450 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:12.674509 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:12.703294 2974151 cri.go:89] found id: ""
	I1217 10:48:12.703308 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.703315 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:12.703320 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:12.703378 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:12.727986 2974151 cri.go:89] found id: ""
	I1217 10:48:12.728006 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.728013 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:12.728019 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:12.728078 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:12.753787 2974151 cri.go:89] found id: ""
	I1217 10:48:12.753800 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.753807 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:12.753812 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:12.753869 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:12.779807 2974151 cri.go:89] found id: ""
	I1217 10:48:12.779831 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.779838 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:12.779844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:12.779904 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:12.806196 2974151 cri.go:89] found id: ""
	I1217 10:48:12.806211 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.806219 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:12.806227 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:12.806237 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:12.862792 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:12.862812 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:12.879906 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:12.879923 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:12.944306 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:12.935386   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.935978   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.937685   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.938348   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.940016   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:12.935386   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.935978   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.937685   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.938348   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.940016   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:12.944316 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:12.944327 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:13.006787 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:13.006812 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:15.546753 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:15.557080 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:15.557147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:15.582296 2974151 cri.go:89] found id: ""
	I1217 10:48:15.582309 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.582316 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:15.582321 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:15.582378 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:15.609992 2974151 cri.go:89] found id: ""
	I1217 10:48:15.610006 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.610013 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:15.610018 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:15.610075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:15.635702 2974151 cri.go:89] found id: ""
	I1217 10:48:15.635716 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.635723 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:15.635728 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:15.635788 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:15.661568 2974151 cri.go:89] found id: ""
	I1217 10:48:15.661582 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.661589 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:15.661595 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:15.661652 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:15.691028 2974151 cri.go:89] found id: ""
	I1217 10:48:15.691042 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.691049 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:15.691056 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:15.691114 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:15.715986 2974151 cri.go:89] found id: ""
	I1217 10:48:15.716009 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.716018 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:15.716023 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:15.716088 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:15.742377 2974151 cri.go:89] found id: ""
	I1217 10:48:15.742391 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.742398 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:15.742406 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:15.742417 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:15.759230 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:15.759248 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:15.824478 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:15.816058   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.816539   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818127   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818799   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.820350   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:15.816058   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.816539   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818127   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818799   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.820350   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
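The failure mode is the same in every cycle: the node-local kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8441, and with no kube-apiserver running nothing is listening on that port, so each dial ends in "connection refused". A quick spot-check from inside the node (a sketch only; assumes "minikube ssh" access and that ss and curl exist in the node image):

    # Nothing bound to the apiserver port means connection refused is expected.
    sudo ss -ltnp | grep 8441 || echo "nothing listening on :8441"
    curl -ksS https://localhost:8441/healthz || true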
	I1217 10:48:15.824490 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:15.824502 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:15.892784 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:15.892804 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:15.921547 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:15.921562 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	[The same log-gathering cycle repeats at 10:48:18, 10:48:21, 10:48:24, 10:48:27, 10:48:30, 10:48:33, 10:48:36 and 10:48:39 with identical results: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager or kindnet container is found, and "kubectl describe nodes" fails each time with "connection refused" against localhost:8441.]
	I1217 10:48:41.970682 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:41.981109 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:41.981168 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:42.014790 2974151 cri.go:89] found id: ""
	I1217 10:48:42.014806 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.014813 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:42.014820 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:42.014890 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:42.044163 2974151 cri.go:89] found id: ""
	I1217 10:48:42.044177 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.044183 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:42.044188 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:42.044247 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:42.074548 2974151 cri.go:89] found id: ""
	I1217 10:48:42.074581 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.074595 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:42.074605 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:42.074707 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:42.108730 2974151 cri.go:89] found id: ""
	I1217 10:48:42.108755 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.108763 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:42.108769 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:42.108838 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:42.140974 2974151 cri.go:89] found id: ""
	I1217 10:48:42.140989 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.140997 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:42.141002 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:42.141075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:42.185841 2974151 cri.go:89] found id: ""
	I1217 10:48:42.185857 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.185865 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:42.185871 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:42.185940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:42.227621 2974151 cri.go:89] found id: ""
	I1217 10:48:42.227637 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.227645 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:42.227654 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:42.227664 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:42.293458 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:42.293479 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:42.316925 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:42.316945 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:42.388580 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:42.379787   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.380216   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.381959   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.382335   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.383960   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:42.379787   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.380216   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.381959   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.382335   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.383960   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:42.388600 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:42.388612 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:42.451727 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:42.451749 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:44.984590 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:44.995270 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:44.995356 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:45.061848 2974151 cri.go:89] found id: ""
	I1217 10:48:45.061864 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.061871 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:45.061878 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:45.061944 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:45.120144 2974151 cri.go:89] found id: ""
	I1217 10:48:45.120160 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.120168 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:45.120174 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:45.120245 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:45.160210 2974151 cri.go:89] found id: ""
	I1217 10:48:45.160226 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.160235 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:45.160240 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:45.160314 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:45.221796 2974151 cri.go:89] found id: ""
	I1217 10:48:45.221829 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.221858 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:45.221880 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:45.222024 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:45.304676 2974151 cri.go:89] found id: ""
	I1217 10:48:45.304703 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.304711 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:45.304717 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:45.304788 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:45.337767 2974151 cri.go:89] found id: ""
	I1217 10:48:45.337790 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.337798 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:45.337804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:45.337871 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:45.373372 2974151 cri.go:89] found id: ""
	I1217 10:48:45.373387 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.373394 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:45.373402 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:45.373412 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:45.433269 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:45.433288 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:45.450287 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:45.450304 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:45.517643 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:45.508647   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.509270   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511035   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511639   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.513218   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:45.508647   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.509270   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511035   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511639   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.513218   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:45.517653 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:45.517665 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:45.581750 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:45.581771 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:48.117070 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:48.128197 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:48.128258 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:48.153434 2974151 cri.go:89] found id: ""
	I1217 10:48:48.153449 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.153455 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:48.153461 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:48.153520 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:48.178677 2974151 cri.go:89] found id: ""
	I1217 10:48:48.178691 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.178698 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:48.178703 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:48.178766 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:48.206864 2974151 cri.go:89] found id: ""
	I1217 10:48:48.206879 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.206886 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:48.206891 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:48.206957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:48.231924 2974151 cri.go:89] found id: ""
	I1217 10:48:48.231938 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.231945 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:48.231950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:48.232008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:48.274705 2974151 cri.go:89] found id: ""
	I1217 10:48:48.274718 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.274726 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:48.274731 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:48.274790 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:48.303863 2974151 cri.go:89] found id: ""
	I1217 10:48:48.303877 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.303884 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:48.303889 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:48.303950 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:48.328838 2974151 cri.go:89] found id: ""
	I1217 10:48:48.328852 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.328859 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:48.328867 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:48.328878 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:48.389442 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:48.389462 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:48.406684 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:48.406700 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:48.472922 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:48.463986   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.464483   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466011   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466510   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.468051   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:48.463986   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.464483   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466011   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466510   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.468051   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:48.472932 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:48.472943 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:48.535655 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:48.535674 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:51.069071 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:51.081466 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:51.081531 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:51.111124 2974151 cri.go:89] found id: ""
	I1217 10:48:51.111139 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.111146 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:51.111152 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:51.111218 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:51.143791 2974151 cri.go:89] found id: ""
	I1217 10:48:51.143806 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.143813 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:51.143818 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:51.143881 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:51.169640 2974151 cri.go:89] found id: ""
	I1217 10:48:51.169655 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.169661 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:51.169666 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:51.169726 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:51.195027 2974151 cri.go:89] found id: ""
	I1217 10:48:51.195041 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.195048 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:51.195053 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:51.195115 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:51.219317 2974151 cri.go:89] found id: ""
	I1217 10:48:51.219330 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.219337 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:51.219342 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:51.219401 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:51.246522 2974151 cri.go:89] found id: ""
	I1217 10:48:51.246536 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.246543 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:51.246548 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:51.246606 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:51.277021 2974151 cri.go:89] found id: ""
	I1217 10:48:51.277047 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.277055 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:51.277064 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:51.277074 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:51.345341 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:51.345364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:51.378677 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:51.378693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:51.438850 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:51.438869 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:51.455900 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:51.455916 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:51.516892 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:51.508779   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.509483   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.510624   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.511147   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.512798   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:51.508779   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.509483   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.510624   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.511147   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.512798   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:54.017193 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:54.028476 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:54.028544 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:54.056997 2974151 cri.go:89] found id: ""
	I1217 10:48:54.057012 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.057019 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:54.057025 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:54.057086 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:54.083159 2974151 cri.go:89] found id: ""
	I1217 10:48:54.083175 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.083183 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:54.083189 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:54.083251 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:54.109519 2974151 cri.go:89] found id: ""
	I1217 10:48:54.109534 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.109549 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:54.109557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:54.109624 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:54.134157 2974151 cri.go:89] found id: ""
	I1217 10:48:54.134171 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.134178 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:54.134183 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:54.134239 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:54.162788 2974151 cri.go:89] found id: ""
	I1217 10:48:54.162802 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.162819 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:54.162825 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:54.162894 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:54.189731 2974151 cri.go:89] found id: ""
	I1217 10:48:54.189749 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.189756 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:54.189762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:54.189850 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:54.214954 2974151 cri.go:89] found id: ""
	I1217 10:48:54.214968 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.214975 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:54.214982 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:54.214992 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:54.232128 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:54.232145 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:54.332775 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:54.323643   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.324176   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.325741   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.326329   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.328065   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:54.323643   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.324176   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.325741   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.326329   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.328065   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:54.332784 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:54.332794 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:54.400873 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:54.400902 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:54.436837 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:54.436855 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:56.995650 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:57.014000 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:57.014068 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:57.039621 2974151 cri.go:89] found id: ""
	I1217 10:48:57.039635 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.039642 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:57.039647 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:57.039706 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:57.063811 2974151 cri.go:89] found id: ""
	I1217 10:48:57.063824 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.063832 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:57.063837 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:57.063901 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:57.089763 2974151 cri.go:89] found id: ""
	I1217 10:48:57.089777 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.089784 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:57.089789 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:57.089849 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:57.119137 2974151 cri.go:89] found id: ""
	I1217 10:48:57.119151 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.119157 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:57.119163 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:57.119222 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:57.145301 2974151 cri.go:89] found id: ""
	I1217 10:48:57.145317 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.145324 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:57.145330 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:57.145390 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:57.169967 2974151 cri.go:89] found id: ""
	I1217 10:48:57.169981 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.169989 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:57.169994 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:57.170055 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:57.199678 2974151 cri.go:89] found id: ""
	I1217 10:48:57.199693 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.199700 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:57.199708 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:57.199718 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:57.259994 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:57.260013 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:57.283244 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:57.283262 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:57.355664 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:57.347248   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.348013   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.349816   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.350323   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.351848   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:57.347248   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.348013   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.349816   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.350323   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.351848   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:57.355675 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:57.355686 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:57.418570 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:57.418593 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:59.953153 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:59.963676 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:59.963736 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:59.989636 2974151 cri.go:89] found id: ""
	I1217 10:48:59.989654 2974151 logs.go:282] 0 containers: []
	W1217 10:48:59.989662 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:59.989667 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:59.989734 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:00.158254 2974151 cri.go:89] found id: ""
	I1217 10:49:00.158276 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.158284 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:00.158290 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:00.158371 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:00.272664 2974151 cri.go:89] found id: ""
	I1217 10:49:00.272680 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.272687 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:00.272693 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:00.272790 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:00.329030 2974151 cri.go:89] found id: ""
	I1217 10:49:00.329045 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.329052 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:00.329058 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:00.329123 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:00.376045 2974151 cri.go:89] found id: ""
	I1217 10:49:00.376060 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.376068 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:00.376074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:00.376141 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:00.406187 2974151 cri.go:89] found id: ""
	I1217 10:49:00.406202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.406210 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:00.406216 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:00.406281 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:00.436523 2974151 cri.go:89] found id: ""
	I1217 10:49:00.436538 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.436546 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:00.436554 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:00.436575 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:00.504375 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:00.495726   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.496591   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498206   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498541   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.500005   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:00.495726   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.496591   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498206   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498541   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.500005   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:00.504450 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:00.504460 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:00.568543 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:00.568563 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:00.600756 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:00.600773 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:00.662114 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:00.662131 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:03.181138 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:03.191733 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:03.191796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:03.220693 2974151 cri.go:89] found id: ""
	I1217 10:49:03.220707 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.220714 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:03.220719 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:03.220775 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:03.245346 2974151 cri.go:89] found id: ""
	I1217 10:49:03.245359 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.245366 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:03.245371 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:03.245434 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:03.283019 2974151 cri.go:89] found id: ""
	I1217 10:49:03.283034 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.283042 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:03.283072 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:03.283134 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:03.312584 2974151 cri.go:89] found id: ""
	I1217 10:49:03.312599 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.312605 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:03.312611 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:03.312670 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:03.337325 2974151 cri.go:89] found id: ""
	I1217 10:49:03.337340 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.337347 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:03.337352 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:03.337421 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:03.363072 2974151 cri.go:89] found id: ""
	I1217 10:49:03.363086 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.363093 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:03.363099 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:03.363156 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:03.388307 2974151 cri.go:89] found id: ""
	I1217 10:49:03.388321 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.388328 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:03.388336 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:03.388346 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:03.450591 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:03.450611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:03.479831 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:03.479848 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:03.538921 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:03.538940 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:03.557193 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:03.557210 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:03.629818 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:03.620815   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.621960   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.622408   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.623908   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.624403   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
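
Every kubectl attempt in this stretch fails identically ("dial tcp [::1]:8441: connect: connection refused"), meaning nothing is listening on the apiserver port at all (8441, per the URLs in the errors above). A minimal stand-alone sketch of that reachability check in Go, independent of kubectl; the helper name and the timeout are illustrative choices, only the address comes from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer dials the apiserver port directly and surfaces the same
// "connection refused" condition kubectl keeps reporting above. The helper
// name and the 2s timeout are illustrative, not minikube code.
func probeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("apiserver not reachable at %s: %w", addr, err)
	}
	return conn.Close()
}

func main() {
	if err := probeAPIServer("localhost:8441"); err != nil {
		fmt.Println(err) // prints "... connect: connection refused" while no apiserver is up
	}
}
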
	I1217 10:49:06.130079 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:06.140562 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:06.140625 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:06.176078 2974151 cri.go:89] found id: ""
	I1217 10:49:06.176092 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.176100 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:06.176106 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:06.176165 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:06.201648 2974151 cri.go:89] found id: ""
	I1217 10:49:06.201669 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.201678 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:06.201683 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:06.201741 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:06.225531 2974151 cri.go:89] found id: ""
	I1217 10:49:06.225545 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.225552 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:06.225557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:06.225615 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:06.252027 2974151 cri.go:89] found id: ""
	I1217 10:49:06.252042 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.252049 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:06.252056 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:06.252118 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:06.280340 2974151 cri.go:89] found id: ""
	I1217 10:49:06.280353 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.280361 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:06.280366 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:06.280449 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:06.313759 2974151 cri.go:89] found id: ""
	I1217 10:49:06.313773 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.313781 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:06.313786 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:06.313846 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:06.338616 2974151 cri.go:89] found id: ""
	I1217 10:49:06.338630 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.338638 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:06.338645 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:06.338655 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:06.394759 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:06.394784 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:06.412192 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:06.412208 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:06.475020 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:06.466865   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.467591   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469274   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469719   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.471184   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:06.475030 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:06.475039 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:06.537503 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:06.537522 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
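
Each diagnostic pass walks the same fixed list of control-plane components and treats empty output from "crictl ps -a --quiet --name=<component>" as no container found. A rough stand-alone Go equivalent of that walk, assuming crictl is available on the node; the command string is copied from the Run: lines, while the loop itself is illustrative rather than minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names taken from the listing cycle repeated in the log.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same command the ssh_runner executes on the node.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.TrimSpace(string(out))
		if err != nil || ids == "" {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, ids)
	}
}
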
	I1217 10:49:09.067381 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:09.078169 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:09.078242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:09.102188 2974151 cri.go:89] found id: ""
	I1217 10:49:09.102202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.102210 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:09.102215 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:09.102276 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:09.127428 2974151 cri.go:89] found id: ""
	I1217 10:49:09.127443 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.127457 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:09.127462 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:09.127523 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:09.155928 2974151 cri.go:89] found id: ""
	I1217 10:49:09.155943 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.155951 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:09.155956 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:09.156013 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:09.180962 2974151 cri.go:89] found id: ""
	I1217 10:49:09.180976 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.180983 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:09.180988 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:09.181047 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:09.206446 2974151 cri.go:89] found id: ""
	I1217 10:49:09.206459 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.206466 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:09.206471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:09.206527 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:09.234163 2974151 cri.go:89] found id: ""
	I1217 10:49:09.234177 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.234184 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:09.234191 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:09.234248 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:09.266062 2974151 cri.go:89] found id: ""
	I1217 10:49:09.266076 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.266083 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:09.266091 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:09.266100 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:09.331047 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:09.331068 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:09.348066 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:09.348082 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:09.416466 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:09.408138   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.408821   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410542   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410884   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.412400   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:09.416475 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:09.416488 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:09.477634 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:09.477656 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:12.006559 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:12.017999 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:12.018064 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:12.043667 2974151 cri.go:89] found id: ""
	I1217 10:49:12.043681 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.043689 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:12.043694 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:12.043755 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:12.067975 2974151 cri.go:89] found id: ""
	I1217 10:49:12.068000 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.068008 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:12.068013 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:12.068082 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:12.093913 2974151 cri.go:89] found id: ""
	I1217 10:49:12.093936 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.093944 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:12.093950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:12.094011 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:12.123009 2974151 cri.go:89] found id: ""
	I1217 10:49:12.123022 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.123029 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:12.123046 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:12.123121 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:12.152263 2974151 cri.go:89] found id: ""
	I1217 10:49:12.152277 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.152284 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:12.152299 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:12.152357 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:12.178500 2974151 cri.go:89] found id: ""
	I1217 10:49:12.178514 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.178521 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:12.178527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:12.178601 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:12.203660 2974151 cri.go:89] found id: ""
	I1217 10:49:12.203674 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.203692 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:12.203700 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:12.203711 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:12.261019 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:12.261039 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:12.279774 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:12.279790 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:12.350172 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:12.342156   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.342650   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344118   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344659   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.346217   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:12.350182 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:12.350192 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:12.412715 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:12.412734 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
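
The pgrep timestamps above (10:49:06.1, 10:49:09.0, 10:49:12.0, ...) show the whole check repeating just under every three seconds. A sketch of a wait loop with that shape; the interval and the overall deadline are read off the timestamps and are assumptions, not minikube's actual retry implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative overall timeout
	for time.Now().Before(deadline) {
		// The liveness check each pass starts with, verbatim from the log.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// The real runner gathers kubelet/containerd/dmesg logs here
		// before trying again; see the gather sketch further down.
		time.Sleep(3 * time.Second) // cadence inferred from the timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
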
	I1217 10:49:14.942372 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:14.953073 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:14.953133 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:14.986889 2974151 cri.go:89] found id: ""
	I1217 10:49:14.986903 2974151 logs.go:282] 0 containers: []
	W1217 10:49:14.986910 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:14.986916 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:14.987012 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:15.024956 2974151 cri.go:89] found id: ""
	I1217 10:49:15.024972 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.024980 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:15.024986 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:15.025062 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:15.055135 2974151 cri.go:89] found id: ""
	I1217 10:49:15.055159 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.055170 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:15.055175 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:15.055244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:15.083268 2974151 cri.go:89] found id: ""
	I1217 10:49:15.083283 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.083310 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:15.083316 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:15.083386 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:15.110734 2974151 cri.go:89] found id: ""
	I1217 10:49:15.110750 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.110757 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:15.110764 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:15.110825 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:15.140854 2974151 cri.go:89] found id: ""
	I1217 10:49:15.140869 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.140876 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:15.140881 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:15.140981 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:15.167259 2974151 cri.go:89] found id: ""
	I1217 10:49:15.167273 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.167280 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:15.167288 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:15.167298 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:15.224081 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:15.224100 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:15.241661 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:15.241679 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:15.322485 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:15.313320   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.313943   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316017   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316658   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.318128   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:15.322495 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:15.322517 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:15.385975 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:15.385996 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:17.915565 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:17.925558 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:17.925619 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:17.950881 2974151 cri.go:89] found id: ""
	I1217 10:49:17.950895 2974151 logs.go:282] 0 containers: []
	W1217 10:49:17.950902 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:17.950907 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:17.950964 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:17.975955 2974151 cri.go:89] found id: ""
	I1217 10:49:17.975969 2974151 logs.go:282] 0 containers: []
	W1217 10:49:17.975975 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:17.975980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:17.976039 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:18.004484 2974151 cri.go:89] found id: ""
	I1217 10:49:18.004503 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.004512 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:18.004517 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:18.004597 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:18.031679 2974151 cri.go:89] found id: ""
	I1217 10:49:18.031694 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.031702 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:18.031708 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:18.031775 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:18.059398 2974151 cri.go:89] found id: ""
	I1217 10:49:18.059412 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.059436 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:18.059443 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:18.059504 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:18.085330 2974151 cri.go:89] found id: ""
	I1217 10:49:18.085344 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.085352 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:18.085357 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:18.085420 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:18.114569 2974151 cri.go:89] found id: ""
	I1217 10:49:18.114585 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.114592 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:18.114600 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:18.114611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:18.178110 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:18.169772   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.170633   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172208   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172731   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.174231   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:18.178122 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:18.178132 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:18.241410 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:18.241434 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:18.273882 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:18.273898 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:18.334306 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:18.334324 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:20.852121 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:20.862188 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:20.862248 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:20.886819 2974151 cri.go:89] found id: ""
	I1217 10:49:20.886834 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.886850 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:20.886857 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:20.886930 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:20.913071 2974151 cri.go:89] found id: ""
	I1217 10:49:20.913086 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.913093 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:20.913098 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:20.913157 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:20.937301 2974151 cri.go:89] found id: ""
	I1217 10:49:20.937315 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.937322 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:20.937327 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:20.937386 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:20.966247 2974151 cri.go:89] found id: ""
	I1217 10:49:20.966260 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.966267 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:20.966272 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:20.966328 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:20.991713 2974151 cri.go:89] found id: ""
	I1217 10:49:20.991727 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.991734 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:20.991739 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:20.991796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:21.017813 2974151 cri.go:89] found id: ""
	I1217 10:49:21.017828 2974151 logs.go:282] 0 containers: []
	W1217 10:49:21.017835 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:21.017841 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:21.017901 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:21.047576 2974151 cri.go:89] found id: ""
	I1217 10:49:21.047590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:21.047598 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:21.047605 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:21.047615 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:21.109681 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:21.109707 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:21.127095 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:21.127114 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:21.192482 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:21.184199   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.184777   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186485   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186953   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.188551   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:21.192493 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:21.192504 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:21.256363 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:21.256383 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
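
When a pass finds no containers, it pulls the same five diagnostic sources before retrying. The command strings below are verbatim from the Run: lines above; running them from one Go program is purely illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Gather commands copied from the Run: lines in the log.
	gather := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, g := range gather {
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			// "describe nodes" keeps failing this way while the apiserver is down.
			fmt.Printf("gathering %s failed: %v\n", g.name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", g.name, out)
	}
}
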
	I1217 10:49:23.824987 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:23.835117 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:23.835179 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:23.860953 2974151 cri.go:89] found id: ""
	I1217 10:49:23.860966 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.860973 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:23.860979 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:23.861036 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:23.894776 2974151 cri.go:89] found id: ""
	I1217 10:49:23.894790 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.894797 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:23.894802 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:23.894863 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:23.923645 2974151 cri.go:89] found id: ""
	I1217 10:49:23.923660 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.923667 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:23.923678 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:23.923735 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:23.950354 2974151 cri.go:89] found id: ""
	I1217 10:49:23.950368 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.950374 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:23.950380 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:23.950437 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:23.974645 2974151 cri.go:89] found id: ""
	I1217 10:49:23.974659 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.974666 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:23.974671 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:23.974732 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:24.000121 2974151 cri.go:89] found id: ""
	I1217 10:49:24.000149 2974151 logs.go:282] 0 containers: []
	W1217 10:49:24.000157 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:24.000163 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:24.000242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:24.034475 2974151 cri.go:89] found id: ""
	I1217 10:49:24.034489 2974151 logs.go:282] 0 containers: []
	W1217 10:49:24.034497 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:24.034505 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:24.034514 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:24.099963 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:24.099984 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:24.136430 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:24.136447 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:24.192589 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:24.192651 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:24.209690 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:24.209707 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:24.292778 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:24.284539   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.285387   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287069   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287380   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.288843   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:26.793038 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:26.803569 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:26.803630 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:26.829202 2974151 cri.go:89] found id: ""
	I1217 10:49:26.829215 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.829222 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:26.829227 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:26.829285 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:26.855339 2974151 cri.go:89] found id: ""
	I1217 10:49:26.855353 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.855359 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:26.855365 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:26.855434 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:26.882145 2974151 cri.go:89] found id: ""
	I1217 10:49:26.882160 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.882168 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:26.882174 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:26.882231 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:26.906912 2974151 cri.go:89] found id: ""
	I1217 10:49:26.906925 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.906932 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:26.906937 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:26.906994 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:26.931691 2974151 cri.go:89] found id: ""
	I1217 10:49:26.931714 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.931722 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:26.931732 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:26.931798 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:26.957483 2974151 cri.go:89] found id: ""
	I1217 10:49:26.957497 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.957504 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:26.957510 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:26.957570 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:26.981546 2974151 cri.go:89] found id: ""
	I1217 10:49:26.981560 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.981567 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:26.981574 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:26.981584 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:27.038884 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:27.038905 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:27.059063 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:27.059079 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:27.122721 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:27.114006   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.114575   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.116274   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.117079   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.118797   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:27.114006   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.114575   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.116274   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.117079   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.118797   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:27.122731 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:27.122741 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:27.188207 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:27.188227 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
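
Every empty `found id: ""` above comes from the same probe: crictl is asked for each control-plane container by name and returns no IDs, which means containerd never created those containers at all, not merely that they crashed. A minimal sketch of reproducing that probe by hand; the node container name "minikube" and the containerd socket path are assumptions for illustration, not taken from this log:

    # List any kube-apiserver containers, running or exited, inside the node.
    docker exec minikube sudo crictl \
      --runtime-endpoint unix:///run/containerd/containerd.sock \
      ps -a --quiet --name=kube-apiserver
    # No output at all (as in the log) means the container was never created.
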
	I1217 10:49:29.720397 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:29.731016 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:29.731089 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:29.759816 2974151 cri.go:89] found id: ""
	I1217 10:49:29.759836 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.759843 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:29.759848 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:29.759909 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:29.784725 2974151 cri.go:89] found id: ""
	I1217 10:49:29.784739 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.784747 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:29.784752 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:29.784813 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:29.810710 2974151 cri.go:89] found id: ""
	I1217 10:49:29.810724 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.810731 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:29.810736 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:29.810796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:29.835166 2974151 cri.go:89] found id: ""
	I1217 10:49:29.835180 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.835187 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:29.835196 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:29.835255 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:29.862724 2974151 cri.go:89] found id: ""
	I1217 10:49:29.862738 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.862745 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:29.862750 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:29.862814 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:29.887572 2974151 cri.go:89] found id: ""
	I1217 10:49:29.887590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.887597 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:29.887608 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:29.887676 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:29.911679 2974151 cri.go:89] found id: ""
	I1217 10:49:29.911693 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.911700 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:29.911708 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:29.911717 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:29.974573 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:29.974595 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:30.028175 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:30.028195 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:30.102876 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:30.102898 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:30.120802 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:30.120826 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:30.191763 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:30.183313   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.184024   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.185583   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.186151   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.187552   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
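
The repeated `connect: connection refused` on `[::1]:8441` says nothing is listening on the apiserver port at all, which is consistent with the empty container listings. A quick confirmation, sketched here; run inside the node, where port 8441 is taken from the errors above and everything else is illustrative:

    # Is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Does the endpoint answer at all? (-k skips cert verification)
    curl -sk --max-time 2 https://localhost:8441/healthz || echo "healthz unreachable"
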
	I1217 10:49:32.692593 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:32.703024 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:32.703087 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:32.733277 2974151 cri.go:89] found id: ""
	I1217 10:49:32.733302 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.733310 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:32.733317 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:32.733384 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:32.763219 2974151 cri.go:89] found id: ""
	I1217 10:49:32.763234 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.763241 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:32.763246 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:32.763304 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:32.793128 2974151 cri.go:89] found id: ""
	I1217 10:49:32.793143 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.793150 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:32.793155 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:32.793213 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:32.824178 2974151 cri.go:89] found id: ""
	I1217 10:49:32.824194 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.824201 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:32.824206 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:32.824271 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:32.854145 2974151 cri.go:89] found id: ""
	I1217 10:49:32.854170 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.854178 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:32.854183 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:32.854251 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:32.879767 2974151 cri.go:89] found id: ""
	I1217 10:49:32.879797 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.879804 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:32.879809 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:32.879899 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:32.909819 2974151 cri.go:89] found id: ""
	I1217 10:49:32.909833 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.909842 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:32.909849 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:32.909859 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:32.938841 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:32.938857 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:32.995133 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:32.995156 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:33.014953 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:33.014974 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:33.085045 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:33.075667   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.076471   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078226   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078820   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.080383   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:33.085054 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:33.085065 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
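
Each gathering pass tails the same two systemd units with `journalctl -u <unit> -n 400`. Outside this harness the equivalent commands are, as a sketch, with `--no-pager` added here for interactive use:

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
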
	I1217 10:49:35.651037 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:35.661187 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:35.661246 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:35.687255 2974151 cri.go:89] found id: ""
	I1217 10:49:35.687270 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.687277 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:35.687282 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:35.687340 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:35.713953 2974151 cri.go:89] found id: ""
	I1217 10:49:35.713967 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.713974 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:35.713980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:35.714040 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:35.742852 2974151 cri.go:89] found id: ""
	I1217 10:49:35.742866 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.742874 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:35.742879 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:35.742937 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:35.768219 2974151 cri.go:89] found id: ""
	I1217 10:49:35.768233 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.768240 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:35.768246 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:35.768314 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:35.792498 2974151 cri.go:89] found id: ""
	I1217 10:49:35.792512 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.792519 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:35.792524 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:35.792583 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:35.818063 2974151 cri.go:89] found id: ""
	I1217 10:49:35.818077 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.818084 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:35.818089 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:35.818147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:35.843090 2974151 cri.go:89] found id: ""
	I1217 10:49:35.843105 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.843111 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:35.843119 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:35.843129 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:35.899655 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:35.899673 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:35.916834 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:35.916850 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:35.982052 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:35.973406   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.974102   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.975751   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.976284   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.977956   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:35.982062 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:35.982075 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:36.049729 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:36.049750 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
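
The `describe nodes` step drives a version-pinned kubectl against the node-local kubeconfig, whose server URL evidently points at `https://localhost:8441` per the errors it produces. The same invocation, reproduced as a standalone sketch with nothing assumed beyond running it directly on the node:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # Fails with "connection refused" until an apiserver actually serves on 8441.
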
	I1217 10:49:38.582447 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:38.592471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:38.592528 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:38.617757 2974151 cri.go:89] found id: ""
	I1217 10:49:38.617772 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.617779 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:38.617786 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:38.617845 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:38.647228 2974151 cri.go:89] found id: ""
	I1217 10:49:38.647242 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.647249 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:38.647254 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:38.647312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:38.672309 2974151 cri.go:89] found id: ""
	I1217 10:49:38.672324 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.672331 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:38.672336 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:38.672395 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:38.699575 2974151 cri.go:89] found id: ""
	I1217 10:49:38.699590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.699597 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:38.699603 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:38.699660 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:38.729276 2974151 cri.go:89] found id: ""
	I1217 10:49:38.729290 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.729297 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:38.729303 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:38.729361 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:38.757110 2974151 cri.go:89] found id: ""
	I1217 10:49:38.757124 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.757131 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:38.757137 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:38.757197 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:38.783523 2974151 cri.go:89] found id: ""
	I1217 10:49:38.783537 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.783544 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:38.783551 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:38.783562 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:38.854691 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:38.846060   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.846723   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.848354   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.849037   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.850802   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:38.854701 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:38.854713 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:38.918821 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:38.918843 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:38.947201 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:38.947217 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:39.004566 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:39.004587 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
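
Between gathering passes the harness polls for an apiserver process with `pgrep -xnf` (newest process whose full command line exactly matches the pattern); the timestamps show a new pass starting roughly every 2.5 to 3 seconds. A minimal sketch of that wait loop, where the sleep interval is inferred from the timestamps rather than taken from the code:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 2.5   # approximate cadence observed in this log
    done
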
	I1217 10:49:41.522977 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:41.536227 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:41.536288 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:41.566436 2974151 cri.go:89] found id: ""
	I1217 10:49:41.566451 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.566458 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:41.566466 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:41.566527 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:41.599863 2974151 cri.go:89] found id: ""
	I1217 10:49:41.599879 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.599886 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:41.599892 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:41.599956 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:41.631187 2974151 cri.go:89] found id: ""
	I1217 10:49:41.631202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.631209 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:41.631216 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:41.631274 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:41.658402 2974151 cri.go:89] found id: ""
	I1217 10:49:41.658416 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.658423 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:41.658428 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:41.658487 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:41.686724 2974151 cri.go:89] found id: ""
	I1217 10:49:41.686738 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.686745 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:41.686751 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:41.686809 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:41.721194 2974151 cri.go:89] found id: ""
	I1217 10:49:41.721208 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.721215 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:41.721220 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:41.721279 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:41.750295 2974151 cri.go:89] found id: ""
	I1217 10:49:41.750309 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.750316 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:41.750323 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:41.750334 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:41.779389 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:41.779406 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:41.837692 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:41.837715 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:41.854830 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:41.854847 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:41.919451 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:41.911491   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.912035   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.913552   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.914095   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.915570   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:41.919461 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:41.919470 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:44.482271 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:44.492656 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:44.492720 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:44.530744 2974151 cri.go:89] found id: ""
	I1217 10:49:44.530758 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.530765 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:44.530770 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:44.530831 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:44.556602 2974151 cri.go:89] found id: ""
	I1217 10:49:44.556616 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.556624 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:44.556629 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:44.556687 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:44.582820 2974151 cri.go:89] found id: ""
	I1217 10:49:44.582835 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.582842 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:44.582847 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:44.582906 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:44.607152 2974151 cri.go:89] found id: ""
	I1217 10:49:44.607166 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.607173 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:44.607184 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:44.607244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:44.634565 2974151 cri.go:89] found id: ""
	I1217 10:49:44.634579 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.634587 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:44.634592 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:44.634662 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:44.661979 2974151 cri.go:89] found id: ""
	I1217 10:49:44.661993 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.662000 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:44.662005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:44.662066 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:44.686675 2974151 cri.go:89] found id: ""
	I1217 10:49:44.686697 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.686705 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:44.686713 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:44.686722 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:44.743011 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:44.743033 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:44.759816 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:44.759833 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:44.824819 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:44.816544   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.817205   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.818745   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.819310   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.820870   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:44.824830 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:44.824841 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:44.890788 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:44.890807 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
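
The container-status probe is deliberately defensive: it resolves crictl via `which` with a bare-name fallback (so PATH lookup still gets a chance), and falls back to docker if crictl fails outright. The same pattern as a standalone sketch:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
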
	I1217 10:49:47.418865 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:47.429392 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:47.429467 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:47.454629 2974151 cri.go:89] found id: ""
	I1217 10:49:47.454643 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.454650 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:47.454655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:47.454766 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:47.480876 2974151 cri.go:89] found id: ""
	I1217 10:49:47.480890 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.480897 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:47.480902 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:47.480970 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:47.512027 2974151 cri.go:89] found id: ""
	I1217 10:49:47.512041 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.512054 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:47.512060 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:47.512120 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:47.539586 2974151 cri.go:89] found id: ""
	I1217 10:49:47.539600 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.539608 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:47.539613 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:47.539671 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:47.566423 2974151 cri.go:89] found id: ""
	I1217 10:49:47.566437 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.566444 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:47.566450 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:47.566507 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:47.592329 2974151 cri.go:89] found id: ""
	I1217 10:49:47.592343 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.592350 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:47.592355 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:47.592442 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:47.617999 2974151 cri.go:89] found id: ""
	I1217 10:49:47.618013 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.618020 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:47.618028 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:47.618037 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:47.678218 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:47.678240 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:47.695642 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:47.695659 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:47.762123 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:47.753095   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.754063   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.755748   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.756187   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.757812   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:47.762133 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:47.762146 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:47.828387 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:47.828408 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:50.363629 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:50.373970 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:50.374026 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:50.398664 2974151 cri.go:89] found id: ""
	I1217 10:49:50.398678 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.398685 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:50.398690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:50.398749 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:50.424119 2974151 cri.go:89] found id: ""
	I1217 10:49:50.424132 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.424139 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:50.424144 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:50.424203 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:50.450501 2974151 cri.go:89] found id: ""
	I1217 10:49:50.450516 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.450523 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:50.450529 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:50.450591 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:50.479279 2974151 cri.go:89] found id: ""
	I1217 10:49:50.479330 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.479338 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:50.479344 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:50.479402 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:50.514044 2974151 cri.go:89] found id: ""
	I1217 10:49:50.514058 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.514065 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:50.514070 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:50.514147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:50.550857 2974151 cri.go:89] found id: ""
	I1217 10:49:50.550871 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.550878 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:50.550883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:50.550943 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:50.586702 2974151 cri.go:89] found id: ""
	I1217 10:49:50.586716 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.586724 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:50.586731 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:50.586740 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:50.649317 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:50.649338 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:50.681689 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:50.681706 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:50.739069 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:50.739092 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:50.756760 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:50.756777 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:50.826240 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:50.816693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.817339   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819115   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819743   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.821406   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:50.816693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.817339   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819115   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819743   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.821406   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
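
Every "describe nodes" attempt in this window fails before any API call is made: kubectl's discovery client gets connection refused dialing [::1]:8441, which means no process is bound to the apiserver port (8441) inside the node. That is consistent with the empty kube-apiserver container listings above. A minimal reachability probe against the same endpoint, written as an illustrative Go sketch (not minikube code; host and port are taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the endpoint kubectl fails against (https://localhost:8441).
        // "connection refused" here confirms nothing is listening on the
        // port, matching the repeated memcache.go errors in the log.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
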
	I1217 10:49:53.327009 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:53.338042 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:53.338105 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:53.364395 2974151 cri.go:89] found id: ""
	I1217 10:49:53.364409 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.364437 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:53.364443 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:53.364504 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:53.391405 2974151 cri.go:89] found id: ""
	I1217 10:49:53.391418 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.391425 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:53.391435 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:53.391495 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:53.415894 2974151 cri.go:89] found id: ""
	I1217 10:49:53.415909 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.415916 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:53.415921 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:53.415987 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:53.441489 2974151 cri.go:89] found id: ""
	I1217 10:49:53.441505 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.441512 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:53.441518 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:53.441577 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:53.470465 2974151 cri.go:89] found id: ""
	I1217 10:49:53.470480 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.470487 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:53.470492 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:53.470580 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:53.496777 2974151 cri.go:89] found id: ""
	I1217 10:49:53.496791 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.496798 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:53.496804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:53.496862 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:53.522462 2974151 cri.go:89] found id: ""
	I1217 10:49:53.522477 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.522484 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:53.522492 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:53.522503 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:53.587962 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:53.587981 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:53.605021 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:53.605038 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:53.674653 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:53.666595   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.667148   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.668629   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.669055   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.670469   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:53.666595   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.667148   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.668629   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.669055   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.670469   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:53.674671 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:53.674682 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:53.736888 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:53.736908 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
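
Each iteration in this stretch repeats the same cycle every few seconds (see the timestamps stepping 10:49:50 → 10:49:53 → 10:49:56): probe for a running kube-apiserver process with pgrep, list each expected control-plane container by name with crictl (every listing comes back empty here), then gather kubelet, dmesg, describe-nodes, containerd, and container-status diagnostics. A compact sketch of that cycle follows; the command strings are copied from the log, while the loop structure, function names, and use of os/exec are assumptions for illustration, not minikube's actual source:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // The components the log polls for, in the order they appear above.
    var controlPlane = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    func apiserverRunning() bool {
        // Mirrors the probe "sudo pgrep -xnf kube-apiserver.*minikube.*":
        // a zero exit status means a matching process exists.
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        for !apiserverRunning() {
            for _, name := range controlPlane {
                // Mirrors "sudo crictl ps -a --quiet --name=<component>";
                // --quiet prints only container IDs, so empty output is
                // what the log reports as "0 containers".
                out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
                if len(out) == 0 {
                    fmt.Printf("No container was found matching %q\n", name)
                }
            }
            // Diagnostic gathering (journalctl, dmesg, kubectl describe
            // nodes) happens here in the real run; elided in this sketch.
            time.Sleep(3 * time.Second) // roughly the cadence seen in the timestamps
        }
    }
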
	I1217 10:49:56.264574 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:56.274948 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:56.275019 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:56.306086 2974151 cri.go:89] found id: ""
	I1217 10:49:56.306108 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.306116 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:56.306122 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:56.306189 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:56.331503 2974151 cri.go:89] found id: ""
	I1217 10:49:56.331517 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.331524 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:56.331529 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:56.331588 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:56.357713 2974151 cri.go:89] found id: ""
	I1217 10:49:56.357727 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.357734 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:56.357740 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:56.357804 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:56.386307 2974151 cri.go:89] found id: ""
	I1217 10:49:56.386322 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.386329 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:56.386335 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:56.386392 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:56.411103 2974151 cri.go:89] found id: ""
	I1217 10:49:56.411116 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.411148 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:56.411154 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:56.411210 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:56.438603 2974151 cri.go:89] found id: ""
	I1217 10:49:56.438617 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.438632 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:56.438638 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:56.438700 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:56.463485 2974151 cri.go:89] found id: ""
	I1217 10:49:56.463499 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.463506 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:56.463513 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:56.463526 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:56.480151 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:56.480170 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:56.564122 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:56.555873   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.556612   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558127   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558422   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.559904   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:56.555873   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.556612   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558127   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558422   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.559904   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:56.564133 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:56.564152 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:56.631606 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:56.631625 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:56.658603 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:56.658621 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
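
The "container status" gatherer above is deliberately defensive: in the one-liner `which crictl || echo crictl` resolves crictl's path when possible (the echo keeps the command substitution non-empty even if which finds nothing), and the trailing "|| sudo docker ps -a" falls back to Docker when the crictl listing fails. The same try-then-fall-back behavior, re-expressed in Go purely for illustration (the real step just runs the bash one-liner shown in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Try the CRI-aware listing first, as the bash one-liner does.
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // Equivalent of the "|| sudo docker ps -a" fallback clause.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("neither crictl nor docker could list containers:", err)
            return
        }
        fmt.Print(string(out))
    }
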
	I1217 10:49:59.216557 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:59.226542 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:59.226605 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:59.250485 2974151 cri.go:89] found id: ""
	I1217 10:49:59.250501 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.250522 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:59.250528 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:59.250597 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:59.275922 2974151 cri.go:89] found id: ""
	I1217 10:49:59.275936 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.275945 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:59.275960 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:59.276021 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:59.305346 2974151 cri.go:89] found id: ""
	I1217 10:49:59.305372 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.305380 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:59.305386 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:59.305454 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:59.329784 2974151 cri.go:89] found id: ""
	I1217 10:49:59.329799 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.329806 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:59.329812 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:59.329870 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:59.353939 2974151 cri.go:89] found id: ""
	I1217 10:49:59.353953 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.353961 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:59.353968 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:59.354030 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:59.379444 2974151 cri.go:89] found id: ""
	I1217 10:49:59.379458 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.379465 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:59.379471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:59.379535 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:59.404346 2974151 cri.go:89] found id: ""
	I1217 10:49:59.404360 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.404367 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:59.404374 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:59.404385 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:59.421191 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:59.421209 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:59.484153 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:59.476366   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.477052   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478594   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478902   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.480341   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:59.476366   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.477052   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478594   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478902   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.480341   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:59.484164 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:59.484177 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:59.553474 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:59.553493 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:59.587183 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:59.587199 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:02.144181 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:02.155199 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:02.155292 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:02.188757 2974151 cri.go:89] found id: ""
	I1217 10:50:02.188773 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.188780 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:02.188785 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:02.188851 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:02.219315 2974151 cri.go:89] found id: ""
	I1217 10:50:02.219330 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.219337 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:02.219342 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:02.219406 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:02.244595 2974151 cri.go:89] found id: ""
	I1217 10:50:02.244609 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.244616 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:02.244622 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:02.244684 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:02.270632 2974151 cri.go:89] found id: ""
	I1217 10:50:02.270647 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.270654 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:02.270659 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:02.270718 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:02.296393 2974151 cri.go:89] found id: ""
	I1217 10:50:02.296407 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.296447 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:02.296454 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:02.296521 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:02.326837 2974151 cri.go:89] found id: ""
	I1217 10:50:02.326851 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.326859 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:02.326868 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:02.326931 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:02.356502 2974151 cri.go:89] found id: ""
	I1217 10:50:02.356517 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.356527 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:02.356536 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:02.356548 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:02.434224 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:02.417603   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.418251   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.426024   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428283   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428822   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:02.417603   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.418251   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.426024   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428283   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428822   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:50:02.434234 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:02.434244 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:02.502034 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:02.502055 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:02.541286 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:02.541303 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:02.606116 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:02.606137 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:05.125496 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:05.136157 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:05.136217 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:05.160937 2974151 cri.go:89] found id: ""
	I1217 10:50:05.160952 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.160959 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:05.160964 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:05.161024 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:05.185873 2974151 cri.go:89] found id: ""
	I1217 10:50:05.185887 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.185894 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:05.185900 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:05.185999 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:05.212646 2974151 cri.go:89] found id: ""
	I1217 10:50:05.212676 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.212684 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:05.212690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:05.212767 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:05.238323 2974151 cri.go:89] found id: ""
	I1217 10:50:05.238340 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.238347 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:05.238353 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:05.238414 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:05.263764 2974151 cri.go:89] found id: ""
	I1217 10:50:05.263779 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.263786 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:05.263792 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:05.263849 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:05.289054 2974151 cri.go:89] found id: ""
	I1217 10:50:05.289069 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.289076 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:05.289081 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:05.289144 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:05.314515 2974151 cri.go:89] found id: ""
	I1217 10:50:05.314530 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.314538 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:05.314546 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:05.314556 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:05.380980 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:05.381002 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:05.414207 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:05.414222 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:05.472281 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:05.472301 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:05.489358 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:05.489375 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:05.571554 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:05.562906   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.563808   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.565527   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.566129   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.567151   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:05.562906   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.563808   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.565527   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.566129   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.567151   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:50:08.071830 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:08.082387 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:08.082462 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:08.110539 2974151 cri.go:89] found id: ""
	I1217 10:50:08.110553 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.110561 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:08.110566 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:08.110629 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:08.135732 2974151 cri.go:89] found id: ""
	I1217 10:50:08.135746 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.135754 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:08.135760 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:08.135828 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:08.162274 2974151 cri.go:89] found id: ""
	I1217 10:50:08.162289 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.162296 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:08.162302 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:08.162359 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:08.187522 2974151 cri.go:89] found id: ""
	I1217 10:50:08.187536 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.187543 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:08.187549 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:08.187618 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:08.212868 2974151 cri.go:89] found id: ""
	I1217 10:50:08.212883 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.212890 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:08.212896 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:08.212958 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:08.236894 2974151 cri.go:89] found id: ""
	I1217 10:50:08.236908 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.236915 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:08.236921 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:08.236981 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:08.262293 2974151 cri.go:89] found id: ""
	I1217 10:50:08.262308 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.262315 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:08.262322 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:08.262332 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:08.320099 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:08.320118 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:08.337595 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:08.337611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:08.404535 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:08.395902   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.396655   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398294   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398971   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.400705   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:08.395902   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.396655   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398294   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398971   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.400705   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:50:08.404545 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:08.404557 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:08.467318 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:08.467338 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:11.014160 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:11.025076 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:11.025146 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:11.050236 2974151 cri.go:89] found id: ""
	I1217 10:50:11.050252 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.050260 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:11.050265 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:11.050329 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:11.081289 2974151 cri.go:89] found id: ""
	I1217 10:50:11.081311 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.081318 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:11.081324 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:11.081385 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:11.111117 2974151 cri.go:89] found id: ""
	I1217 10:50:11.111134 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.111141 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:11.111146 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:11.111209 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:11.137886 2974151 cri.go:89] found id: ""
	I1217 10:50:11.137900 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.137908 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:11.137913 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:11.137972 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:11.164080 2974151 cri.go:89] found id: ""
	I1217 10:50:11.164096 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.164104 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:11.164119 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:11.164183 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:11.194241 2974151 cri.go:89] found id: ""
	I1217 10:50:11.194256 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.194264 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:11.194269 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:11.194331 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:11.220644 2974151 cri.go:89] found id: ""
	I1217 10:50:11.220659 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.220666 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:11.220673 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:11.220687 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:11.283052 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:11.283070 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:11.310700 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:11.310717 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:11.366749 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:11.366769 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:11.383957 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:11.383975 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:11.451001 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:11.442629   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.443048   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.444733   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.445416   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.447157   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:11.442629   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.443048   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.444733   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.445416   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.447157   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:50:13.952741 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:13.962784 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:13.962846 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:13.987247 2974151 cri.go:89] found id: ""
	I1217 10:50:13.987262 2974151 logs.go:282] 0 containers: []
	W1217 10:50:13.987269 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:13.987274 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:13.987340 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:14.012962 2974151 cri.go:89] found id: ""
	I1217 10:50:14.012977 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.012984 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:14.012990 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:14.013058 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:14.038181 2974151 cri.go:89] found id: ""
	I1217 10:50:14.038195 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.038203 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:14.038208 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:14.038266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:14.062700 2974151 cri.go:89] found id: ""
	I1217 10:50:14.062715 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.062723 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:14.062728 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:14.062785 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:14.093364 2974151 cri.go:89] found id: ""
	I1217 10:50:14.093386 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.093393 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:14.093399 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:14.093457 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:14.118504 2974151 cri.go:89] found id: ""
	I1217 10:50:14.118519 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.118525 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:14.118531 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:14.118596 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:14.143182 2974151 cri.go:89] found id: ""
	I1217 10:50:14.143198 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.143204 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:14.143212 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:14.143223 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:14.201003 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:14.201024 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:14.218136 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:14.218153 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:14.291347 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:14.280379   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.281686   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285094   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285633   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.287421   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:14.291358 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:14.291370 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:14.354518 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:14.354541 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
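	For reference, the probe sequence above can be reproduced by hand on the node. The individual commands are taken verbatim from the log; only the loop wrapper below is illustrative:

	    # Probe each control-plane component the way minikube's log-gatherer does.
	    # The loop is a convenience; the log runs each crictl invocation separately.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        sudo crictl ps -a --quiet --name="$c"
	    done
	    # Container status, with a docker fallback if crictl is missing (as in the log):
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	Every probe in this run returned an empty ID list, which is why each component is reported as not found.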
	I1217 10:50:16.888907 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:16.899327 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:16.899396 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:16.924553 2974151 cri.go:89] found id: ""
	I1217 10:50:16.924572 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.924580 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:16.924586 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:16.924646 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:16.950729 2974151 cri.go:89] found id: ""
	I1217 10:50:16.950743 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.950750 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:16.950756 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:16.950811 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:16.978167 2974151 cri.go:89] found id: ""
	I1217 10:50:16.978181 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.978189 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:16.978193 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:16.978254 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:17.005223 2974151 cri.go:89] found id: ""
	I1217 10:50:17.005239 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.005247 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:17.005253 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:17.005336 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:17.031301 2974151 cri.go:89] found id: ""
	I1217 10:50:17.031315 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.031323 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:17.031328 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:17.031393 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:17.058782 2974151 cri.go:89] found id: ""
	I1217 10:50:17.058796 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.058804 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:17.058810 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:17.058869 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:17.084580 2974151 cri.go:89] found id: ""
	I1217 10:50:17.084595 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.084603 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:17.084611 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:17.084628 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:17.144045 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:17.144067 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:17.161459 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:17.161476 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:17.230344 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:17.221052   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.221467   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.224663   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.225044   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.226301   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:17.230353 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:17.230364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:17.292978 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:17.292998 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:19.828581 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:19.838853 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:19.838914 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:19.864198 2974151 cri.go:89] found id: ""
	I1217 10:50:19.864213 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.864220 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:19.864225 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:19.864284 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:19.899721 2974151 cri.go:89] found id: ""
	I1217 10:50:19.899735 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.899758 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:19.899764 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:19.899837 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:19.928330 2974151 cri.go:89] found id: ""
	I1217 10:50:19.928345 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.928352 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:19.928356 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:19.928445 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:19.954497 2974151 cri.go:89] found id: ""
	I1217 10:50:19.954514 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.954538 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:19.954545 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:19.954608 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:19.980091 2974151 cri.go:89] found id: ""
	I1217 10:50:19.980105 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.980112 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:19.980118 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:19.980184 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:20.010659 2974151 cri.go:89] found id: ""
	I1217 10:50:20.010676 2974151 logs.go:282] 0 containers: []
	W1217 10:50:20.010685 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:20.010691 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:20.010767 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:20.043088 2974151 cri.go:89] found id: ""
	I1217 10:50:20.043104 2974151 logs.go:282] 0 containers: []
	W1217 10:50:20.043113 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:20.043121 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:20.043132 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:20.100529 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:20.100550 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:20.118575 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:20.118591 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:20.187144 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:20.178717   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.179517   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181042   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181412   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.182990   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:20.187155 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:20.187167 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:20.249393 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:20.249414 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:22.778795 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:22.790536 2974151 kubeadm.go:602] duration metric: took 4m2.042602584s to restartPrimaryControlPlane
	W1217 10:50:22.790601 2974151 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 10:50:22.790675 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 10:50:23.205315 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 10:50:23.219008 2974151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 10:50:23.227117 2974151 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:50:23.227176 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:50:23.235370 2974151 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:50:23.235380 2974151 kubeadm.go:158] found existing configuration files:
	
	I1217 10:50:23.235436 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:50:23.243539 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:50:23.243597 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:50:23.251153 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:50:23.259288 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:50:23.259364 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:50:23.267370 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:50:23.275727 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:50:23.275787 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:50:23.283930 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:50:23.292280 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:50:23.292340 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
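	The cleanup above applies one fixed pattern per kubeconfig file: grep it for the expected control-plane endpoint, and if the file is missing or points elsewhere, remove it before re-running kubeadm init. A minimal sketch of that pattern, using the same paths and endpoint as the log (the loop form is illustrative):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # grep exits non-zero when the file is missing or lacks the endpoint,
	        # which triggers the removal, mirroring kubeadm.go:164 above.
	        sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/$f \
	            || sudo rm -f /etc/kubernetes/$f
	    done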
	I1217 10:50:23.300010 2974151 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:50:23.340550 2974151 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:50:23.340717 2974151 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:50:23.412202 2974151 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:50:23.412287 2974151 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:50:23.412322 2974151 kubeadm.go:319] OS: Linux
	I1217 10:50:23.412377 2974151 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:50:23.412441 2974151 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:50:23.412489 2974151 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:50:23.412536 2974151 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:50:23.412585 2974151 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:50:23.412632 2974151 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:50:23.412677 2974151 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:50:23.412724 2974151 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:50:23.412769 2974151 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:50:23.486890 2974151 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:50:23.486989 2974151 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:50:23.487074 2974151 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:50:23.492949 2974151 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:50:23.496478 2974151 out.go:252]   - Generating certificates and keys ...
	I1217 10:50:23.496568 2974151 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:50:23.496637 2974151 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:50:23.496718 2974151 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 10:50:23.496782 2974151 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 10:50:23.496856 2974151 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 10:50:23.496912 2974151 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 10:50:23.496979 2974151 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 10:50:23.497043 2974151 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 10:50:23.497122 2974151 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 10:50:23.497199 2974151 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 10:50:23.497239 2974151 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 10:50:23.497303 2974151 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:50:23.659882 2974151 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:50:23.806390 2974151 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:50:23.994170 2974151 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:50:24.254389 2974151 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:50:24.616203 2974151 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:50:24.616885 2974151 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:50:24.619452 2974151 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:50:24.622875 2974151 out.go:252]   - Booting up control plane ...
	I1217 10:50:24.622979 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:50:24.623060 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:50:24.623134 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:50:24.643299 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:50:24.643404 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:50:24.652837 2974151 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:50:24.652937 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:50:24.652975 2974151 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:50:24.787245 2974151 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:50:24.787354 2974151 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:54:24.787078 2974151 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000331472s
	I1217 10:54:24.787103 2974151 kubeadm.go:319] 
	I1217 10:54:24.787156 2974151 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:54:24.787187 2974151 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:54:24.787285 2974151 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:54:24.787290 2974151 kubeadm.go:319] 
	I1217 10:54:24.787387 2974151 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:54:24.787416 2974151 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:54:24.787445 2974151 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:54:24.787448 2974151 kubeadm.go:319] 
	I1217 10:54:24.791515 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:54:24.791934 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:54:24.792041 2974151 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:54:24.792274 2974151 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 10:54:24.792279 2974151 kubeadm.go:319] 
	I1217 10:54:24.792347 2974151 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
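	kubeadm waited the full 4m0s window for the kubelet to answer its health probe and then aborted. The probe and the two troubleshooting commands it recommends can all be run directly on the node; each appears verbatim in the output above:

	    curl -sSL http://127.0.0.1:10248/healthz   # the health probe kubeadm polls
	    systemctl status kubelet                   # is the unit active at all?
	    journalctl -xeu kubelet                    # recent kubelet log, with context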
	W1217 10:54:24.792486 2974151 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000331472s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
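	Two of the preflight warnings above are actionable on the node itself. Which cgroup hierarchy is mounted can be checked with a standard stat invocation (cgroup2fs indicates cgroups v2; tmpfs indicates the deprecated v1 layout flagged in the warning), and the disabled kubelet unit can be enabled exactly as the warning suggests:

	    stat -fc %T /sys/fs/cgroup              # cgroup2fs => v2, tmpfs => v1
	    sudo systemctl enable kubelet.service   # addresses [WARNING Service-kubelet]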
	
	I1217 10:54:24.792573 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 10:54:25.209097 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 10:54:25.222902 2974151 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:54:25.222960 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:54:25.231173 2974151 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:54:25.231182 2974151 kubeadm.go:158] found existing configuration files:
	
	I1217 10:54:25.231234 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:54:25.239239 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:54:25.239293 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:54:25.246851 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:54:25.254681 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:54:25.254734 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:54:25.262252 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:54:25.270359 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:54:25.270417 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:54:25.277936 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:54:25.286063 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:54:25.286121 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 10:54:25.293834 2974151 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:54:25.333226 2974151 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:54:25.333620 2974151 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:54:25.403386 2974151 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:54:25.403450 2974151 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:54:25.403488 2974151 kubeadm.go:319] OS: Linux
	I1217 10:54:25.403533 2974151 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:54:25.403579 2974151 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:54:25.403625 2974151 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:54:25.403672 2974151 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:54:25.403719 2974151 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:54:25.403765 2974151 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:54:25.403809 2974151 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:54:25.403855 2974151 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:54:25.403900 2974151 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:54:25.478252 2974151 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:54:25.478355 2974151 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:54:25.478445 2974151 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:54:25.483628 2974151 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:54:25.487136 2974151 out.go:252]   - Generating certificates and keys ...
	I1217 10:54:25.487234 2974151 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:54:25.487310 2974151 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:54:25.487433 2974151 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 10:54:25.487529 2974151 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 10:54:25.487605 2974151 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 10:54:25.487662 2974151 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 10:54:25.487729 2974151 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 10:54:25.487795 2974151 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 10:54:25.487917 2974151 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 10:54:25.487994 2974151 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 10:54:25.488380 2974151 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 10:54:25.488481 2974151 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:54:26.117291 2974151 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:54:26.756756 2974151 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:54:27.066378 2974151 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:54:27.235545 2974151 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:54:27.468773 2974151 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:54:27.469453 2974151 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:54:27.472021 2974151 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:54:27.475042 2974151 out.go:252]   - Booting up control plane ...
	I1217 10:54:27.475141 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:54:27.475225 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:54:27.475306 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:54:27.497360 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:54:27.497461 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:54:27.505167 2974151 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:54:27.506337 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:54:27.506384 2974151 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:54:27.645391 2974151 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:54:27.645508 2974151 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:58:27.644872 2974151 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000353032s
	I1217 10:58:27.644897 2974151 kubeadm.go:319] 
	I1217 10:58:27.644952 2974151 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:58:27.644984 2974151 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:58:27.645087 2974151 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:58:27.645092 2974151 kubeadm.go:319] 
	I1217 10:58:27.645195 2974151 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:58:27.645226 2974151 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:58:27.645255 2974151 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:58:27.645258 2974151 kubeadm.go:319] 
	I1217 10:58:27.649050 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:58:27.649524 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:58:27.649634 2974151 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:58:27.649875 2974151 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 10:58:27.649881 2974151 kubeadm.go:319] 
	I1217 10:58:27.649949 2974151 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 10:58:27.650003 2974151 kubeadm.go:403] duration metric: took 12m6.936466746s to StartCluster
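	The 12m6s total is consistent with the earlier timings in this log: roughly 4m2s in restartPrimaryControlPlane, plus two kubeadm init attempts that each exhausted the 4m0s kubelet-check window, with the remaining seconds spent on reset and config cleanup between attempts.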
	I1217 10:58:27.650034 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:58:27.650094 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:58:27.678841 2974151 cri.go:89] found id: ""
	I1217 10:58:27.678855 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.678862 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:58:27.678868 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:58:27.678928 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:58:27.704494 2974151 cri.go:89] found id: ""
	I1217 10:58:27.704507 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.704514 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:58:27.704520 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:58:27.704578 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:58:27.729757 2974151 cri.go:89] found id: ""
	I1217 10:58:27.729770 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.729777 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:58:27.729783 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:58:27.729840 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:58:27.757253 2974151 cri.go:89] found id: ""
	I1217 10:58:27.757267 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.757274 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:58:27.757284 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:58:27.757343 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:58:27.781735 2974151 cri.go:89] found id: ""
	I1217 10:58:27.781749 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.781756 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:58:27.781760 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:58:27.781817 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:58:27.806628 2974151 cri.go:89] found id: ""
	I1217 10:58:27.806642 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.806649 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:58:27.806655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:58:27.806713 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:58:27.831983 2974151 cri.go:89] found id: ""
	I1217 10:58:27.831997 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.832004 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:58:27.832013 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:58:27.832023 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:58:27.889768 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:58:27.889788 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:58:27.906789 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:58:27.906806 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:58:27.971294 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:58:27.963241   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.963807   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965347   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965829   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.967335   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:58:27.971304 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:58:27.971317 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:58:28.034286 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:58:28.034308 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 10:58:28.076352 2974151 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 10:58:28.076384 2974151 out.go:285] * 
	W1217 10:58:28.076460 2974151 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:58:28.076478 2974151 out.go:285] * 
	W1217 10:58:28.078620 2974151 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:58:28.084354 2974151 out.go:203] 
	W1217 10:58:28.086597 2974151 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:58:28.086645 2974151 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 10:58:28.086668 2974151 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 10:58:28.089656 2974151 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.366987997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367000042Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367054433Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367069325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367089255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367101152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367110883Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367125414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367141668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367171452Z" level=info msg="Connect containerd service"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367467445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.368062180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389242103Z" level=info msg="Start subscribing containerd event"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389467722Z" level=info msg="Start recovering state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389473490Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.390097098Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430326171Z" level=info msg="Start event monitor"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430520850Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430594670Z" level=info msg="Start streaming server"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430655559Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430712788Z" level=info msg="runtime interface starting up..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430945234Z" level=info msg="starting plugins..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430989147Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.431326009Z" level=info msg="containerd successfully booted in 0.084806s"
	Dec 17 10:46:19 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:58:29.294713   20990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:29.295492   20990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:29.297127   20990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:29.297459   20990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:29.298978   20990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:58:29 up 16:40,  0 user,  load average: 0.55, 0.27, 0.47
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 10:58:25 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:58:26 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 17 10:58:26 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:26 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:26 functional-232588 kubelet[20794]: E1217 10:58:26.545880   20794 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:58:26 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:58:26 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:58:27 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 17 10:58:27 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:27 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:27 functional-232588 kubelet[20799]: E1217 10:58:27.302867   20799 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:58:27 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:58:27 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:58:27 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 10:58:27 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:28 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:28 functional-232588 kubelet[20875]: E1217 10:58:28.090201   20875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:58:28 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:58:28 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:58:28 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 10:58:28 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:28 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:28 functional-232588 kubelet[20906]: E1217 10:58:28.827036   20906 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:58:28 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:58:28 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
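The "==> kubelet <==" section above is the proximate failure: kubelet v1.35.0-rc.1 exits at config validation because the node is on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it (counter 318-321), and kubeadm's wait-control-plane gives up after 4m0s. A minimal diagnostic sketch, assuming the functional-232588 container is still running; the container name and config path are taken from the log above, everything else is stock coreutils/systemd:

	# "cgroup2fs" means cgroup v2; "tmpfs" means the cgroup v1 hierarchy this kubelet rejects
	stat -fc %T /sys/fs/cgroup/
	docker exec functional-232588 stat -fc %T /sys/fs/cgroup/
	# the rendered kubelet config that kubeadm wrote above
	docker exec functional-232588 grep -i cgroup /var/lib/kubelet/config.yaml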
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (440.747836ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (733.53s)
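The SystemVerification warning repeated in the kubeadm output names the escape hatch for running kubelet v1.35+ on a cgroup v1 host: set the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skip the validation. As a hedged sketch, the option is a plain KubeletConfiguration field; how minikube would splice it into its generated /var/tmp/minikube/kubeadm.yaml is not shown in this log:

	# hypothetical standalone kubeadm config, not minikube's generated one
	cat >> kubeadm.yaml <<'EOF'
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

The alternative first step is the suggestion minikube itself prints above:

	minikube start -p functional-232588 --extra-config=kubelet.cgroup-driver=systemd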

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-232588 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-232588 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (67.915237ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-232588 get po -l tier=control-plane -n kube-system -o=json": exit status 1
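The empty List plus the connection-refused stderr shows kubectl never reached the apiserver, so the label selector itself went untested. Once the apiserver is reachable, the same query can be reduced to name/phase pairs; a sketch reusing the test's own context and selector:

	kubectl --context functional-232588 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'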
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
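The inspect output shows 8441/tcp published to 127.0.0.1:35736, while the kubectl attempts above dialed localhost:8441 and 192.168.49.2:8441; all of them dead-end because no apiserver is listening inside the container. The mapped host port can be extracted with the same Go template this log itself uses later for 22/tcp; a sketch (the curl is expected to fail with connection refused while the control plane is down):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-232588
	curl -k https://127.0.0.1:35736/healthz   # port taken from the inspect output above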
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (295.42308ms)

-- stdout --
	Running

-- /stdout --
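"Running" here is only the docker container state; the APIServer field was reported "Stopped" earlier in this post-mortem. A quick way to watch the kubelet crash loop directly, assuming the container is still up (unit and container names are from the logs above):

	docker exec functional-232588 systemctl is-active kubelet   # prints "activating" or "failed" mid-loop
	docker exec functional-232588 journalctl -u kubelet -n 20 --no-pager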
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-626013 image ls --format yaml --alsologtostderr                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ ssh     │ functional-626013 ssh pgrep buildkitd                                                                                                                 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ image   │ functional-626013 image ls --format json --alsologtostderr                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls --format table --alsologtostderr                                                                                           │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr                                                │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ image   │ functional-626013 image ls                                                                                                                            │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ delete  │ -p functional-626013                                                                                                                                  │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
	│ start   │ -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │                     │
	│ start   │ -p functional-232588 --alsologtostderr -v=8                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:39 UTC │                     │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add registry.k8s.io/pause:latest                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache add minikube-local-cache-test:functional-232588                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ functional-232588 cache delete minikube-local-cache-test:functional-232588                                                                            │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl images                                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	│ cache   │ functional-232588 cache reload                                                                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ kubectl │ functional-232588 kubectl -- --context functional-232588 get pods                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	│ start   │ -p functional-232588 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:46:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:46:16.812860 2974151 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:46:16.812963 2974151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:46:16.813007 2974151 out.go:374] Setting ErrFile to fd 2...
	I1217 10:46:16.813012 2974151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:46:16.813266 2974151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:46:16.813634 2974151 out.go:368] Setting JSON to false
	I1217 10:46:16.814461 2974151 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":59327,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:46:16.814519 2974151 start.go:143] virtualization:  
	I1217 10:46:16.818066 2974151 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:46:16.822068 2974151 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:46:16.822151 2974151 notify.go:221] Checking for updates...
	I1217 10:46:16.828253 2974151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:46:16.831316 2974151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:46:16.834373 2974151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:46:16.837375 2974151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:46:16.840310 2974151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:46:16.843753 2974151 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:46:16.843853 2974151 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:46:16.873076 2974151 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:46:16.873190 2974151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:46:16.938275 2974151 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 10:46:16.928760564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:46:16.938365 2974151 docker.go:319] overlay module found
	I1217 10:46:16.941603 2974151 out.go:179] * Using the docker driver based on existing profile
	I1217 10:46:16.944540 2974151 start.go:309] selected driver: docker
	I1217 10:46:16.944578 2974151 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:16.944677 2974151 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:46:16.944788 2974151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:46:17.021027 2974151 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 10:46:17.010774366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:46:17.021436 2974151 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 10:46:17.021458 2974151 cni.go:84] Creating CNI manager for ""
	I1217 10:46:17.021510 2974151 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:46:17.021561 2974151 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:17.024793 2974151 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:46:17.027565 2974151 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:46:17.030993 2974151 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:46:17.033790 2974151 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:46:17.033824 2974151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:46:17.033833 2974151 cache.go:65] Caching tarball of preloaded images
	I1217 10:46:17.033918 2974151 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:46:17.033926 2974151 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
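
	The preload.go sequence above is a pure cache check: the expected tarball name is derived from the preload schema version, Kubernetes version, container runtime, storage driver, and architecture, and a hit in the local cache skips the download entirely. A sketch of the path construction and stat check, assuming a hypothetical cache root (the naming follows the path visible in the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"runtime"
	)

	// preloadPath mirrors the naming visible in the log:
	// preloaded-images-k8s-v18-<k8sVersion>-<runtime>-overlay2-<arch>.tar.lz4
	// cacheDir and the "v18" schema version are assumptions for this sketch.
	func preloadPath(cacheDir, k8sVersion, containerRuntime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
			k8sVersion, containerRuntime, runtime.GOARCH)
		return filepath.Join(cacheDir, "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.35.0-rc.1", "containerd")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("no local preload, would download:", p)
		}
	}
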
	I1217 10:46:17.034031 2974151 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:46:17.034251 2974151 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:46:17.058099 2974151 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:46:17.058112 2974151 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 10:46:17.058125 2974151 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:46:17.058155 2974151 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:46:17.058219 2974151 start.go:364] duration metric: took 48.59µs to acquireMachinesLock for "functional-232588"
	I1217 10:46:17.058239 2974151 start.go:96] Skipping create...Using existing machine configuration
	I1217 10:46:17.058243 2974151 fix.go:54] fixHost starting: 
	I1217 10:46:17.058504 2974151 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:46:17.079212 2974151 fix.go:112] recreateIfNeeded on functional-232588: state=Running err=<nil>
	W1217 10:46:17.079241 2974151 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 10:46:17.082582 2974151 out.go:252] * Updating the running docker "functional-232588" container ...
	I1217 10:46:17.082612 2974151 machine.go:94] provisionDockerMachine start ...
	I1217 10:46:17.082696 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.100077 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.100208 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.100214 2974151 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:46:17.228063 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:46:17.228077 2974151 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:46:17.228138 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.245852 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.245963 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.245971 2974151 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:46:17.390208 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:46:17.390287 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.409213 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.409321 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.409335 2974151 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:46:17.545048 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: 
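
	The shell fragment above keeps /etc/hosts idempotent: if some line already ends with the hostname, nothing changes; otherwise an existing 127.0.1.1 line is rewritten in place, or a new one is appended. The same logic as a pure Go function (a sketch only; minikube runs the sed/tee variant over SSH):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry reproduces the shell logic from the log: if no line
	// in the hosts content already names the host, either rewrite an
	// existing 127.0.1.1 line or append a new one.
	func ensureHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "functional-232588"))
	}
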
	I1217 10:46:17.545065 2974151 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:46:17.545093 2974151 ubuntu.go:190] setting up certificates
	I1217 10:46:17.545101 2974151 provision.go:84] configureAuth start
	I1217 10:46:17.545170 2974151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:46:17.563036 2974151 provision.go:143] copyHostCerts
	I1217 10:46:17.563100 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:46:17.563107 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:46:17.563182 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:46:17.563277 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:46:17.563281 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:46:17.563306 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:46:17.563356 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:46:17.563359 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:46:17.563381 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:46:17.563426 2974151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
	I1217 10:46:17.716164 2974151 provision.go:177] copyRemoteCerts
	I1217 10:46:17.716219 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:46:17.716261 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.737388 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:17.836120 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:46:17.853626 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:46:17.870501 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 10:46:17.888326 2974151 provision.go:87] duration metric: took 343.201911ms to configureAuth
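
	The provision.go:117 line above describes the interesting part of configureAuth: signing a server certificate with the machine CA, embedding both IP and DNS SANs so the endpoint validates under any of the names listed (127.0.0.1, 192.168.49.2, functional-232588, localhost, minikube). A minimal, self-contained Go sketch of that step using crypto/x509 (error handling elided for brevity; not minikube's actual code):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// A throwaway CA standing in for the cached minikubeCA key pair.
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server leaf with the SANs from the log line above.
		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-232588"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"functional-232588", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Printf("server cert: %d DER bytes, signed by %s\n", len(srvDER), caCert.Subject.CommonName)
	}

	The copyRemoteCerts step that follows in the log then scp's the resulting ca.pem, server.pem, and server-key.pem into /etc/docker on the node.
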
	I1217 10:46:17.888344 2974151 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:46:17.888621 2974151 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:46:17.888627 2974151 machine.go:97] duration metric: took 806.010876ms to provisionDockerMachine
	I1217 10:46:17.888635 2974151 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:46:17.888646 2974151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:46:17.888710 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:46:17.888750 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.905996 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.000491 2974151 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:46:18.012109 2974151 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:46:18.012146 2974151 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:46:18.012158 2974151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:46:18.012224 2974151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:46:18.012302 2974151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:46:18.012378 2974151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:46:18.012531 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:46:18.021349 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:46:18.041286 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:46:18.060319 2974151 start.go:296] duration metric: took 171.669118ms for postStartSetup
	I1217 10:46:18.060436 2974151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:46:18.060478 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.080470 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.173527 2974151 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:46:18.178353 2974151 fix.go:56] duration metric: took 1.120102504s for fixHost
	I1217 10:46:18.178370 2974151 start.go:83] releasing machines lock for "functional-232588", held for 1.120143316s
	I1217 10:46:18.178439 2974151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:46:18.195096 2974151 ssh_runner.go:195] Run: cat /version.json
	I1217 10:46:18.195136 2974151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:46:18.195139 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.195194 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.218089 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.224561 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.312237 2974151 ssh_runner.go:195] Run: systemctl --version
	I1217 10:46:18.401982 2974151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 10:46:18.406442 2974151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:46:18.406503 2974151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:46:18.414452 2974151 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 10:46:18.414475 2974151 start.go:496] detecting cgroup driver to use...
	I1217 10:46:18.414504 2974151 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 10:46:18.414555 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:46:18.437080 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:46:18.453263 2974151 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:46:18.453314 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:46:18.469891 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:46:18.484540 2974151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:46:18.608866 2974151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:46:18.727258 2974151 docker.go:234] disabling docker service ...
	I1217 10:46:18.727333 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:46:18.742532 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:46:18.755933 2974151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:46:18.876736 2974151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:46:18.997189 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 10:46:19.012062 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:46:19.033558 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:46:19.046193 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:46:19.056269 2974151 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:46:19.056333 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:46:19.066650 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:46:19.076242 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:46:19.086026 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:46:19.095009 2974151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:46:19.103467 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:46:19.112970 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:46:19.121805 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 10:46:19.131086 2974151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:46:19.139081 2974151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:46:19.146487 2974151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:46:19.293215 2974151 ssh_runner.go:195] Run: sudo systemctl restart containerd
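
	The sed run above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, force SystemdCgroup = false to match the "cgroupfs" driver detected on the host, migrate io.containerd.runtime.v1.linux and runc.v1 to runc.v2, fix conf_dir, and enable unprivileged ports, followed by a daemon-reload and containerd restart. As a concreteness check, here is one of those edits expressed in Go instead of sed (a sketch, equivalent to `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`):

	package main

	import (
		"fmt"
		"regexp"
	)

	// forceCgroupfs rewrites every SystemdCgroup assignment to false,
	// preserving the original indentation via the capture group.
	func forceCgroupfs(configTOML string) string {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
	}

	func main() {
		in := "  [plugins.'io.containerd.cri.v1.runtime']\n    SystemdCgroup = true\n"
		fmt.Print(forceCgroupfs(in))
	}
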
	I1217 10:46:19.434655 2974151 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:46:19.434715 2974151 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:46:19.439246 2974151 start.go:564] Will wait 60s for crictl version
	I1217 10:46:19.439314 2974151 ssh_runner.go:195] Run: which crictl
	I1217 10:46:19.442915 2974151 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:46:19.467445 2974151 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:46:19.467506 2974151 ssh_runner.go:195] Run: containerd --version
	I1217 10:46:19.489544 2974151 ssh_runner.go:195] Run: containerd --version
	I1217 10:46:19.516185 2974151 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 10:46:19.519114 2974151 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:46:19.535732 2974151 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:46:19.542843 2974151 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 10:46:19.545647 2974151 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:46:19.545821 2974151 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:46:19.545902 2974151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:46:19.570156 2974151 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:46:19.570167 2974151 containerd.go:534] Images already preloaded, skipping extraction
	I1217 10:46:19.570223 2974151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:46:19.598013 2974151 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:46:19.598025 2974151 cache_images.go:86] Images are preloaded, skipping loading
	I1217 10:46:19.598031 2974151 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:46:19.598133 2974151 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 10:46:19.598195 2974151 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:46:19.628150 2974151 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 10:46:19.628169 2974151 cni.go:84] Creating CNI manager for ""
	I1217 10:46:19.628176 2974151 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:46:19.628184 2974151 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:46:19.628205 2974151 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:46:19.628313 2974151 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
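
	One detail of the generated config above worth noting: kubeadm's v1beta4 API changed extraArgs from a flat map to a list of name/value pairs (so repeated flags are possible), which is why every flag renders as a `- name:` / `value:` pair. A sketch of that conversion from the ExtraArgs maps in the kubeadm options, with hypothetical type names:

	package main

	import "fmt"

	// Arg mirrors the shape of kubeadm v1beta4's extraArgs entries:
	// an ordered list of name/value pairs rather than v1beta3's map.
	type Arg struct{ Name, Value string }

	func toExtraArgs(m map[string]string) []Arg {
		out := make([]Arg, 0, len(m))
		for k, v := range m {
			out = append(out, Arg{k, v})
		}
		return out
	}

	func main() {
		for _, a := range toExtraArgs(map[string]string{
			"enable-admission-plugins": "NamespaceAutoProvision",
		}) {
			fmt.Printf("- name: %q\n  value: %q\n", a.Name, a.Value)
		}
	}
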
	
	I1217 10:46:19.628380 2974151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:46:19.636242 2974151 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 10:46:19.636301 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:46:19.643919 2974151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:46:19.658022 2974151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:46:19.670961 2974151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1217 10:46:19.684065 2974151 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:46:19.687947 2974151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:46:19.796384 2974151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:46:20.002745 2974151 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:46:20.002759 2974151 certs.go:195] generating shared ca certs ...
	I1217 10:46:20.002799 2974151 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:46:20.002998 2974151 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:46:20.003055 2974151 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:46:20.003062 2974151 certs.go:257] generating profile certs ...
	I1217 10:46:20.003183 2974151 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:46:20.003236 2974151 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:46:20.003288 2974151 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:46:20.003444 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:46:20.003480 2974151 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:46:20.003508 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:46:20.003545 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:46:20.003577 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:46:20.003610 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:46:20.003665 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:46:20.004449 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:46:20.040127 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:46:20.065442 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:46:20.086611 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:46:20.107054 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:46:20.126007 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:46:20.144078 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:46:20.162802 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:46:20.181368 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:46:20.200073 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:46:20.217945 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:46:20.235640 2974151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 10:46:20.248545 2974151 ssh_runner.go:195] Run: openssl version
	I1217 10:46:20.256076 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.263759 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:46:20.271126 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.274974 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.275038 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.316429 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:46:20.323945 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.331201 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:46:20.339536 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.343551 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.343606 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.384485 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 10:46:20.391694 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.399044 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:46:20.406332 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.410078 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.410134 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.451203 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 10:46:20.458641 2974151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:46:20.462247 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 10:46:20.503114 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 10:46:20.544335 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 10:46:20.590045 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 10:46:20.630985 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 10:46:20.672580 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
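
	Each `openssl x509 -noout -in <cert> -checkend 86400` probe above asks one question: does this control-plane certificate expire within the next 24 hours (86400 seconds)? A Go equivalent using crypto/x509, as a sketch (the cert path in main is just the first one checked in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires inside the given window; the openssl equivalent is
	// `openssl x509 -noout -in <file> -checkend <window-in-seconds>`.
	func expiresWithin(pemPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}
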
	I1217 10:46:20.713547 2974151 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:20.713638 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:46:20.713707 2974151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:46:20.740007 2974151 cri.go:89] found id: ""
	I1217 10:46:20.740065 2974151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:46:20.747914 2974151 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 10:46:20.747924 2974151 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 10:46:20.747974 2974151 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 10:46:20.757908 2974151 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.758430 2974151 kubeconfig.go:125] found "functional-232588" server: "https://192.168.49.2:8441"
	I1217 10:46:20.761036 2974151 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 10:46:20.769414 2974151 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 10:31:46.081162571 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 10:46:19.676908670 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
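
	The drift check at kubeadm.go:645 is conceptually just a file comparison: the freshly rendered kubeadm.yaml.new is diffed against the kubeadm.yaml already on the node, and any difference (here, the admission-plugins value changed by --extra-config) triggers a cluster reconfigure from the new file. A sketch of the decision in Go (minikube shells out to `diff -u` mainly to get the human-readable hunk shown above):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// configDrifted reports whether the rendered config differs from the
	// one already deployed; a true result means "reconfigure the cluster".
	func configDrifted(oldPath, newPath string) (bool, error) {
		oldB, err := os.ReadFile(oldPath)
		if err != nil {
			return false, err
		}
		newB, err := os.ReadFile(newPath)
		if err != nil {
			return false, err
		}
		return !bytes.Equal(oldB, newB), nil
	}

	func main() {
		drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(drifted, err)
	}
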
	I1217 10:46:20.769441 2974151 kubeadm.go:1161] stopping kube-system containers ...
	I1217 10:46:20.769455 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 10:46:20.769528 2974151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:46:20.801226 2974151 cri.go:89] found id: ""
	I1217 10:46:20.801308 2974151 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 10:46:20.820664 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:46:20.829373 2974151 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 10:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 10:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 17 10:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 10:35 /etc/kubernetes/scheduler.conf
	
	I1217 10:46:20.829433 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:46:20.837325 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:46:20.845308 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.845363 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:46:20.853199 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:46:20.860841 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.860897 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:46:20.868346 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:46:20.876151 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.876211 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 10:46:20.883945 2974151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
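
	The grep/rm pairs above implement the cleanup described at kubeadm.go:164: any kubeconfig that does not mention the expected control-plane endpoint is deleted (admin.conf passed the grep, so only kubelet.conf, controller-manager.conf, and scheduler.conf were removed), and the `kubeadm init phase kubeconfig all` that follows regenerates them. A sketch of that loop in Go (hypothetical helper name; the real code runs grep over SSH):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeStaleKubeconfigs deletes any kubeconfig that does not point at
	// the expected control-plane endpoint (or cannot be read), so it can
	// be regenerated by the next kubeadm init phase.
	func removeStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				fmt.Println("removing stale kubeconfig:", p)
				os.Remove(p)
			}
		}
	}

	func main() {
		removeStaleKubeconfigs("https://control-plane.minikube.internal:8441", []string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
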
	I1217 10:46:20.892018 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:20.938748 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.162130 2974151 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.22335875s)
	I1217 10:46:22.162221 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.359829 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.415930 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.468185 2974151 api_server.go:52] waiting for apiserver process to appear ...
	I1217 10:46:22.468265 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:22.969146 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:23.468479 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 116 near-identical probes elided: the same `sudo pgrep -xnf kube-apiserver.*minikube.*` command re-run every ~500ms from 10:46:23.968514 through 10:47:21.468670, never finding a kube-apiserver process ...]
	I1217 10:47:21.969358 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
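
The block above is minikube waiting for a kube-apiserver process to appear, probing roughly every 500ms until a deadline passes and only then falling back to log collection. Below is a minimal Go sketch of that probe-until-deadline pattern: the 500ms cadence is read off the timestamps above, while the 20-second window and the name waitForAPIServer are illustrative assumptions, not minikube's actual implementation.

// Sketch only: mirrors the retry loop visible in the log above.
// The deadline and function name are assumptions for illustration.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a matching kube-apiserver process
// exists or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process is found.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(20 * time.Second); err != nil {
		fmt.Println(err) // the test run falls through to log collection here
	}
}
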
	I1217 10:47:22.469259 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:22.469337 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:22.493873 2974151 cri.go:89] found id: ""
	I1217 10:47:22.493887 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.493894 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:22.493901 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:22.493960 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:22.522462 2974151 cri.go:89] found id: ""
	I1217 10:47:22.522476 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.522483 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:22.522488 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:22.522547 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:22.550878 2974151 cri.go:89] found id: ""
	I1217 10:47:22.550892 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.550899 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:22.550904 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:22.550964 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:22.576167 2974151 cri.go:89] found id: ""
	I1217 10:47:22.576181 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.576188 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:22.576193 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:22.576253 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:22.600591 2974151 cri.go:89] found id: ""
	I1217 10:47:22.600605 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.600612 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:22.600617 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:22.600673 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:22.624978 2974151 cri.go:89] found id: ""
	I1217 10:47:22.624992 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.624999 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:22.625005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:22.625062 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:22.649387 2974151 cri.go:89] found id: ""
	I1217 10:47:22.649401 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.649408 2974151 logs.go:284] No container was found matching "kindnet"
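
Seven consecutive crictl queries returning found id: "" mean containerd is reachable but no control-plane container (apiserver, etcd, scheduler, and so on) was ever created. The same per-component check can be reproduced on the node; this is a hedged sketch assuming crictl and sudo are available there, not part of the test itself.

// Sketch only: runs the same per-component query the log shows;
// empty output corresponds to "0 containers" in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Println(name, "query failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}
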
	I1217 10:47:22.649415 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:22.649427 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:22.666544 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:22.666563 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:22.733635 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:22.724930   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.725595   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727257   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727857   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.729508   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** [same five "connection refused" errors as quoted above] ** /stderr **
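
Every kubectl describe nodes attempt in this window fails the same way: kubectl cannot even open a TCP connection to localhost:8441, the apiserver port for this profile, so the failure is at the socket level rather than in the API. A minimal Go check of that single condition (the port is taken from the errors above; everything else is illustrative):

// Sketch only: reproduces the failing probe behind the kubectl errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// On the node in this log, this prints "connection refused":
		// nothing is listening, so kubectl cannot reach the API server.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
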
	I1217 10:47:22.733647 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:22.733658 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:22.802118 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:22.802139 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:22.842645 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:22.842661 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:25.403296 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:25.413370 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:25.413431 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:25.437778 2974151 cri.go:89] found id: ""
	I1217 10:47:25.437792 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.437799 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:25.437804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:25.437864 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:25.466932 2974151 cri.go:89] found id: ""
	I1217 10:47:25.466946 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.466953 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:25.466959 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:25.467017 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:25.495887 2974151 cri.go:89] found id: ""
	I1217 10:47:25.495901 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.495907 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:25.495912 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:25.495971 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:25.521061 2974151 cri.go:89] found id: ""
	I1217 10:47:25.521075 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.521082 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:25.521087 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:25.521146 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:25.550884 2974151 cri.go:89] found id: ""
	I1217 10:47:25.550898 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.550905 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:25.550910 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:25.550967 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:25.576130 2974151 cri.go:89] found id: ""
	I1217 10:47:25.576145 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.576151 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:25.576156 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:25.576224 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:25.600903 2974151 cri.go:89] found id: ""
	I1217 10:47:25.600916 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.600923 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:25.600931 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:25.600941 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:25.633359 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:25.633375 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:25.689492 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:25.689512 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:25.706643 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:25.706661 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:25.788195 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:25.780730   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.781147   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782587   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782886   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.784365   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** [same five "connection refused" errors as quoted above] ** /stderr **
	I1217 10:47:25.788207 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:25.788218 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:28.357987 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:28.368310 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:28.368371 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:28.393766 2974151 cri.go:89] found id: ""
	I1217 10:47:28.393789 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.393797 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:28.393803 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:28.393876 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:28.418225 2974151 cri.go:89] found id: ""
	I1217 10:47:28.418240 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.418247 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:28.418253 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:28.418312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:28.444064 2974151 cri.go:89] found id: ""
	I1217 10:47:28.444083 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.444091 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:28.444096 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:28.444157 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:28.469125 2974151 cri.go:89] found id: ""
	I1217 10:47:28.469139 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.469146 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:28.469152 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:28.469210 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:28.494598 2974151 cri.go:89] found id: ""
	I1217 10:47:28.494614 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.494621 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:28.494627 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:28.494689 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:28.529767 2974151 cri.go:89] found id: ""
	I1217 10:47:28.529781 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.529788 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:28.529793 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:28.529851 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:28.554626 2974151 cri.go:89] found id: ""
	I1217 10:47:28.554640 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.554653 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:28.554661 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:28.554671 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:28.610665 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:28.610693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:28.627829 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:28.627846 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:28.694227 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:28.685909   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.686688   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688310   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688904   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.690427   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** [same five "connection refused" errors as quoted above] ** /stderr **
	I1217 10:47:28.694247 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:28.694257 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:28.761980 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:28.761999 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:31.299127 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:31.309358 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:31.309418 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:31.334436 2974151 cri.go:89] found id: ""
	I1217 10:47:31.334450 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.334458 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:31.334463 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:31.334530 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:31.359180 2974151 cri.go:89] found id: ""
	I1217 10:47:31.359195 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.359202 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:31.359207 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:31.359264 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:31.386298 2974151 cri.go:89] found id: ""
	I1217 10:47:31.386312 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.386319 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:31.386324 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:31.386385 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:31.414747 2974151 cri.go:89] found id: ""
	I1217 10:47:31.414762 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.414769 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:31.414774 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:31.414835 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:31.439979 2974151 cri.go:89] found id: ""
	I1217 10:47:31.439993 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.439999 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:31.440005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:31.440061 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:31.465613 2974151 cri.go:89] found id: ""
	I1217 10:47:31.465628 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.465635 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:31.465641 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:31.465698 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:31.495303 2974151 cri.go:89] found id: ""
	I1217 10:47:31.495317 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.495324 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:31.495332 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:31.495347 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:31.551359 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:31.551380 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:31.568339 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:31.568356 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:31.631156 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:31.622217   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.623260   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.624240   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.625368   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.626068   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** [same five "connection refused" errors as quoted above] ** /stderr **
	I1217 10:47:31.631168 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:31.631179 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:31.694344 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:31.694364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:34.224306 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:34.234549 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:34.234609 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:34.262893 2974151 cri.go:89] found id: ""
	I1217 10:47:34.262907 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.262913 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:34.262919 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:34.262974 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:34.287865 2974151 cri.go:89] found id: ""
	I1217 10:47:34.287880 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.287887 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:34.287892 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:34.287971 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:34.314130 2974151 cri.go:89] found id: ""
	I1217 10:47:34.314144 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.314151 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:34.314157 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:34.314213 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:34.338080 2974151 cri.go:89] found id: ""
	I1217 10:47:34.338094 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.338101 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:34.338106 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:34.338167 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:34.366907 2974151 cri.go:89] found id: ""
	I1217 10:47:34.366922 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.366929 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:34.366934 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:34.367005 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:34.394628 2974151 cri.go:89] found id: ""
	I1217 10:47:34.394642 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.394650 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:34.394655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:34.394718 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:34.422575 2974151 cri.go:89] found id: ""
	I1217 10:47:34.422590 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.422597 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:34.422605 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:34.422615 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:34.478427 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:34.478445 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:34.495399 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:34.495416 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:34.567591 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:34.559443   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.560218   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.561959   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.562370   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.563927   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** [same five "connection refused" errors as quoted above] ** /stderr **
	I1217 10:47:34.567600 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:34.567611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:34.629987 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:34.630008 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:37.172568 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:37.185167 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:37.185227 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:37.209648 2974151 cri.go:89] found id: ""
	I1217 10:47:37.209662 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.209669 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:37.209674 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:37.209734 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:37.239202 2974151 cri.go:89] found id: ""
	I1217 10:47:37.239216 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.239223 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:37.239229 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:37.239287 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:37.264777 2974151 cri.go:89] found id: ""
	I1217 10:47:37.264791 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.264798 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:37.264803 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:37.264870 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:37.290195 2974151 cri.go:89] found id: ""
	I1217 10:47:37.290209 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.290216 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:37.290221 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:37.290277 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:37.315019 2974151 cri.go:89] found id: ""
	I1217 10:47:37.315033 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.315040 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:37.315046 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:37.315116 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:37.339319 2974151 cri.go:89] found id: ""
	I1217 10:47:37.339333 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.339340 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:37.339345 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:37.339407 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:37.365996 2974151 cri.go:89] found id: ""
	I1217 10:47:37.366010 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.366017 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:37.366024 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:37.366034 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:37.382805 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:37.382824 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:37.447944 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:37.439827   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.440553   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442220   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442682   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.444195   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** [same five "connection refused" errors as quoted above] ** /stderr **
	I1217 10:47:37.447955 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:37.447966 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:37.510276 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:37.510298 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:37.540200 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:37.540215 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:40.105556 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:40.119775 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:40.119860 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:40.144817 2974151 cri.go:89] found id: ""
	I1217 10:47:40.144832 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.144839 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:40.144844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:40.144908 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:40.169663 2974151 cri.go:89] found id: ""
	I1217 10:47:40.169676 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.169683 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:40.169688 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:40.169745 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:40.194821 2974151 cri.go:89] found id: ""
	I1217 10:47:40.194835 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.194842 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:40.194847 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:40.194909 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:40.222839 2974151 cri.go:89] found id: ""
	I1217 10:47:40.222853 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.222860 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:40.222866 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:40.222940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:40.247991 2974151 cri.go:89] found id: ""
	I1217 10:47:40.248005 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.248012 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:40.248017 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:40.248075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:40.272758 2974151 cri.go:89] found id: ""
	I1217 10:47:40.272772 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.272778 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:40.272783 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:40.272844 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:40.298276 2974151 cri.go:89] found id: ""
	I1217 10:47:40.298290 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.298297 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:40.298305 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:40.298316 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:40.314934 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:40.314950 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:40.379519 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:40.371790   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.372215   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.373688   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.374125   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.375622   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** [same five "connection refused" errors as quoted above] ** /stderr **
	I1217 10:47:40.379532 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:40.379544 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:40.442308 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:40.442328 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:40.471269 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:40.471287 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:43.030145 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:43.043645 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:43.043715 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:43.081236 2974151 cri.go:89] found id: ""
	I1217 10:47:43.081250 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.081257 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:43.081262 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:43.081326 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:43.115370 2974151 cri.go:89] found id: ""
	I1217 10:47:43.115384 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.115390 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:43.115399 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:43.115462 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:43.140373 2974151 cri.go:89] found id: ""
	I1217 10:47:43.140387 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.140395 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:43.140400 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:43.140480 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:43.166855 2974151 cri.go:89] found id: ""
	I1217 10:47:43.166870 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.166877 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:43.166883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:43.166941 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:43.191839 2974151 cri.go:89] found id: ""
	I1217 10:47:43.191854 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.191861 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:43.191866 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:43.191927 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:43.217632 2974151 cri.go:89] found id: ""
	I1217 10:47:43.217652 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.217659 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:43.217664 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:43.217725 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:43.242042 2974151 cri.go:89] found id: ""
	I1217 10:47:43.242056 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.242064 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:43.242071 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:43.242081 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:43.299602 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:43.299621 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:43.316995 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:43.317012 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:43.381195 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:43.373241   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.374026   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375639   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375964   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.377408   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** [same five "connection refused" errors as quoted above] ** /stderr **
	I1217 10:47:43.381206 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:43.381217 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:43.443981 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:43.444003 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:45.975295 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:45.985580 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:45.985639 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:46.020412 2974151 cri.go:89] found id: ""
	I1217 10:47:46.020446 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.020454 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:46.020460 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:46.020529 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:46.056724 2974151 cri.go:89] found id: ""
	I1217 10:47:46.056739 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.056755 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:46.056762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:46.056823 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:46.087796 2974151 cri.go:89] found id: ""
	I1217 10:47:46.087811 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.087818 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:46.087844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:46.087924 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:46.112453 2974151 cri.go:89] found id: ""
	I1217 10:47:46.112467 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.112475 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:46.112480 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:46.112539 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:46.141019 2974151 cri.go:89] found id: ""
	I1217 10:47:46.141034 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.141041 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:46.141047 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:46.141103 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:46.165608 2974151 cri.go:89] found id: ""
	I1217 10:47:46.165621 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.165628 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:46.165634 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:46.165691 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:46.192283 2974151 cri.go:89] found id: ""
	I1217 10:47:46.192307 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.192315 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:46.192323 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:46.192335 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:46.255412 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:46.255435 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:46.287390 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:46.287406 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:46.344424 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:46.344442 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:46.361344 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:46.361361 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:46.424398 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:46.416182   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.416923   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418495   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418798   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.420304   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:46.416182   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.416923   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418495   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418798   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.420304   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
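Each polling cycle sweeps the control-plane components with the same crictl query, one component at a time, and every query comes back empty. A self-contained sketch of that sweep, assuming crictl is installed on the node and the invoking user can sudo:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same components the log checks, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// sudo crictl ps -a --quiet --name=<component> prints one
		// container ID per line, or nothing when there is no match.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}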
	I1217 10:47:48.924647 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:48.934813 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:48.934877 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:48.959135 2974151 cri.go:89] found id: ""
	I1217 10:47:48.959159 2974151 logs.go:282] 0 containers: []
	W1217 10:47:48.959166 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:48.959172 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:48.959241 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:48.983610 2974151 cri.go:89] found id: ""
	I1217 10:47:48.983632 2974151 logs.go:282] 0 containers: []
	W1217 10:47:48.983640 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:48.983645 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:48.983714 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:49.026685 2974151 cri.go:89] found id: ""
	I1217 10:47:49.026700 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.026707 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:49.026713 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:49.026773 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:49.060861 2974151 cri.go:89] found id: ""
	I1217 10:47:49.060876 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.060883 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:49.060890 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:49.060950 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:49.090198 2974151 cri.go:89] found id: ""
	I1217 10:47:49.090213 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.090221 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:49.090226 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:49.090288 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:49.119661 2974151 cri.go:89] found id: ""
	I1217 10:47:49.119676 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.119683 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:49.119689 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:49.119812 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:49.148486 2974151 cri.go:89] found id: ""
	I1217 10:47:49.148500 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.148507 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:49.148515 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:49.148525 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:49.212250 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:49.212271 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:49.240975 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:49.240993 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:49.299733 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:49.299756 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:49.316863 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:49.316882 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:49.387132 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:49.378625   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.379410   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381103   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381692   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.383302   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:49.378625   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.379410   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381103   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381692   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.383302   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
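Every cycle opens with the pgrep probe shown above, before any crictl queries run. pgrep's exit status alone answers whether an apiserver process for a minikube profile exists, which is all the loop needs. A sketch of that check (the pattern string is copied from the log; running it anywhere but inside the node is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 when a process matches the pattern and 1 when none
	// does, so the error value alone carries the answer.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("kube-apiserver process is running")
}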
	I1217 10:47:51.888132 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:51.898751 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:51.898816 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:51.932795 2974151 cri.go:89] found id: ""
	I1217 10:47:51.932815 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.932827 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:51.932833 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:51.932896 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:51.963357 2974151 cri.go:89] found id: ""
	I1217 10:47:51.963371 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.963378 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:51.963384 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:51.963448 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:51.988757 2974151 cri.go:89] found id: ""
	I1217 10:47:51.988778 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.988785 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:51.988790 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:51.988850 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:52.028153 2974151 cri.go:89] found id: ""
	I1217 10:47:52.028167 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.028174 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:52.028180 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:52.028244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:52.063954 2974151 cri.go:89] found id: ""
	I1217 10:47:52.063968 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.063975 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:52.063980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:52.064038 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:52.098500 2974151 cri.go:89] found id: ""
	I1217 10:47:52.098514 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.098521 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:52.098527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:52.098587 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:52.130345 2974151 cri.go:89] found id: ""
	I1217 10:47:52.130359 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.130366 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:52.130374 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:52.130384 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:52.189106 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:52.189126 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:52.207475 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:52.207493 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:52.271884 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:52.263636   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.264410   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.265990   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.266498   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.267978   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:52.263636   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.264410   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.265990   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.266498   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.267978   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:52.271903 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:52.271914 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:52.334484 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:52.334504 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:54.867624 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:54.877729 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:54.877789 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:54.902223 2974151 cri.go:89] found id: ""
	I1217 10:47:54.902237 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.902244 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:54.902250 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:54.902312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:54.927795 2974151 cri.go:89] found id: ""
	I1217 10:47:54.927810 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.927817 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:54.927823 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:54.927888 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:54.954800 2974151 cri.go:89] found id: ""
	I1217 10:47:54.954816 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.954823 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:54.954829 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:54.954888 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:54.980005 2974151 cri.go:89] found id: ""
	I1217 10:47:54.980018 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.980025 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:54.980030 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:54.980093 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:55.013092 2974151 cri.go:89] found id: ""
	I1217 10:47:55.013107 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.013115 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:55.013121 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:55.013191 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:55.050531 2974151 cri.go:89] found id: ""
	I1217 10:47:55.050545 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.050552 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:55.050557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:55.050619 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:55.090230 2974151 cri.go:89] found id: ""
	I1217 10:47:55.090245 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.090252 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:55.090260 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:55.090270 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:55.153444 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:55.153464 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:55.185504 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:55.185520 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:55.242466 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:55.242485 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:55.260631 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:55.260648 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:55.331030 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:55.322930   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.323475   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.324828   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.325446   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.327095   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:55.322930   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.323475   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.324828   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.325446   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.327095   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
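The timestamps show a fixed cadence: one full probe-and-gather cycle starts roughly every three seconds. A sketch of that retry shape, with the caveat that minikube's real wait logic is more involved than this; only the cadence visible in the log is modeled, and the address and timeout values are placeholders:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer probes addr until it answers or the deadline passes,
// sleeping three seconds between attempts to match the log's cadence.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not reachable after %s", addr, timeout)
}

func main() {
	if err := waitForAPIServer("localhost:8441", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}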
	I1217 10:47:57.831262 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:57.841170 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:57.841234 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:57.869513 2974151 cri.go:89] found id: ""
	I1217 10:47:57.869529 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.869536 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:57.869542 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:57.869602 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:57.898410 2974151 cri.go:89] found id: ""
	I1217 10:47:57.898424 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.898431 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:57.898437 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:57.898497 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:57.926916 2974151 cri.go:89] found id: ""
	I1217 10:47:57.926931 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.926938 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:57.926944 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:57.927008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:57.956754 2974151 cri.go:89] found id: ""
	I1217 10:47:57.956768 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.956775 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:57.956780 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:57.956840 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:57.981614 2974151 cri.go:89] found id: ""
	I1217 10:47:57.981629 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.981636 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:57.981642 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:57.981701 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:58.021825 2974151 cri.go:89] found id: ""
	I1217 10:47:58.021839 2974151 logs.go:282] 0 containers: []
	W1217 10:47:58.021846 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:58.021852 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:58.021924 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:58.055082 2974151 cri.go:89] found id: ""
	I1217 10:47:58.055097 2974151 logs.go:282] 0 containers: []
	W1217 10:47:58.055104 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:58.055111 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:58.055120 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:58.117865 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:58.117887 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:58.136280 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:58.136297 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:58.204520 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:58.195962   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.196685   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.198489   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.199076   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.200656   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:58.195962   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.196685   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.198489   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.199076   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.200656   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:58.204540 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:58.204551 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:58.267689 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:58.267713 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:00.795803 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:00.807186 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:00.807252 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:00.833048 2974151 cri.go:89] found id: ""
	I1217 10:48:00.833062 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.833069 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:00.833074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:00.833136 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:00.863311 2974151 cri.go:89] found id: ""
	I1217 10:48:00.863325 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.863332 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:00.863338 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:00.863398 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:00.887857 2974151 cri.go:89] found id: ""
	I1217 10:48:00.887871 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.887877 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:00.887883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:00.887940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:00.913735 2974151 cri.go:89] found id: ""
	I1217 10:48:00.913749 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.913756 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:00.913762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:00.913824 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:00.938305 2974151 cri.go:89] found id: ""
	I1217 10:48:00.938319 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.938327 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:00.938333 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:00.938390 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:00.963900 2974151 cri.go:89] found id: ""
	I1217 10:48:00.963914 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.963920 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:00.963925 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:00.963985 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:00.990708 2974151 cri.go:89] found id: ""
	I1217 10:48:00.990722 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.990729 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:00.990737 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:00.990747 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:01.012006 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:01.012023 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:01.099675 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:01.089770   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.090990   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.092688   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.093302   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.095197   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:01.089770   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.090990   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.092688   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.093302   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.095197   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:01.099686 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:01.099702 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:01.164360 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:01.164381 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:01.194518 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:01.194535 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:03.752593 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:03.763233 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:03.763297 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:03.788873 2974151 cri.go:89] found id: ""
	I1217 10:48:03.788893 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.788901 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:03.788907 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:03.788968 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:03.818571 2974151 cri.go:89] found id: ""
	I1217 10:48:03.818586 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.818593 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:03.818598 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:03.818657 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:03.844383 2974151 cri.go:89] found id: ""
	I1217 10:48:03.844397 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.844405 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:03.844410 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:03.844496 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:03.869318 2974151 cri.go:89] found id: ""
	I1217 10:48:03.869333 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.869339 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:03.869345 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:03.869404 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:03.895029 2974151 cri.go:89] found id: ""
	I1217 10:48:03.895043 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.895050 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:03.895055 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:03.895113 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:03.920493 2974151 cri.go:89] found id: ""
	I1217 10:48:03.920509 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.920516 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:03.920522 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:03.920592 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:03.945885 2974151 cri.go:89] found id: ""
	I1217 10:48:03.945898 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.945905 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:03.945912 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:03.945922 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:04.003008 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:04.003033 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:04.026399 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:04.026416 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:04.107334 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:04.098549   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.099321   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101190   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101779   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.103317   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:04.098549   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.099321   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101190   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101779   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.103317   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:04.107349 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:04.107360 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:04.174915 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:04.174940 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
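One detail worth noting across the cycles above: the order of the "Gathering logs for ..." lines changes from run to run (containerd first in some cycles, kubelet or dmesg first in others). A plausible explanation, offered as an assumption about the implementation rather than a confirmed reading of logs.go, is that the log sources live in a Go map, whose iteration order is deliberately randomized:

package main

import "fmt"

func main() {
	// Go randomizes map iteration order, so ranging over a map of log
	// sources yields a different sequence on each run, matching the
	// shifting order seen in the cycles above.
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg",
		"describe nodes":   "kubectl describe nodes",
		"containerd":       "journalctl -u containerd -n 400",
		"container status": "crictl ps -a",
	}
	for name := range sources {
		fmt.Println("Gathering logs for", name, "...")
	}
}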
	I1217 10:48:06.707611 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:06.718250 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:06.718313 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:06.743084 2974151 cri.go:89] found id: ""
	I1217 10:48:06.743098 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.743105 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:06.743110 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:06.743169 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:06.769923 2974151 cri.go:89] found id: ""
	I1217 10:48:06.769937 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.769945 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:06.769950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:06.770016 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:06.798634 2974151 cri.go:89] found id: ""
	I1217 10:48:06.798648 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.798655 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:06.798660 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:06.798719 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:06.823901 2974151 cri.go:89] found id: ""
	I1217 10:48:06.823915 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.823923 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:06.823928 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:06.823990 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:06.849872 2974151 cri.go:89] found id: ""
	I1217 10:48:06.849885 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.849892 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:06.849898 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:06.849957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:06.875558 2974151 cri.go:89] found id: ""
	I1217 10:48:06.875572 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.875580 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:06.875585 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:06.875642 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:06.901051 2974151 cri.go:89] found id: ""
	I1217 10:48:06.901065 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.901071 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:06.901079 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:06.901088 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:06.964468 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:06.964488 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:06.993527 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:06.993542 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:07.062199 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:07.062218 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:07.082316 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:07.082334 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:07.157387 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:07.148299   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.149171   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.150892   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.151645   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.153293   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:07.148299   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.149171   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.150892   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.151645   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.153293   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:09.657640 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:09.667724 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:09.667783 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:09.693919 2974151 cri.go:89] found id: ""
	I1217 10:48:09.693935 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.693941 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:09.693948 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:09.694008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:09.722743 2974151 cri.go:89] found id: ""
	I1217 10:48:09.722758 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.722765 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:09.722770 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:09.722828 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:09.756610 2974151 cri.go:89] found id: ""
	I1217 10:48:09.756624 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.756632 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:09.756637 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:09.756693 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:09.786006 2974151 cri.go:89] found id: ""
	I1217 10:48:09.786021 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.786028 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:09.786033 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:09.786097 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:09.810865 2974151 cri.go:89] found id: ""
	I1217 10:48:09.810878 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.810885 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:09.810890 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:09.810947 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:09.838221 2974151 cri.go:89] found id: ""
	I1217 10:48:09.838235 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.838242 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:09.838247 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:09.838307 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:09.866748 2974151 cri.go:89] found id: ""
	I1217 10:48:09.866762 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.866769 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:09.866776 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:09.866786 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:09.929554 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:09.929576 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:09.959017 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:09.959032 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:10.017246 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:10.017265 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:10.036170 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:10.036188 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:10.112138 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:10.102458   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.103256   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105102   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105527   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.107946   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:10.102458   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.103256   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105102   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105527   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.107946   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
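Each retry cycle opens the same way: a pgrep for a kube-apiserver process, then one crictl query per control-plane component. found id: "" / 0 containers for every name means containerd has never created the control-plane containers, so the failure sits below Kubernetes (kubelet or containerd) rather than in a crashing pod. The per-component probe, bundled into one pass (the same commands the log shows, sketched for running by hand inside the node):

	# same probes the loop runs one by one; empty output means the container was never created
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no $name container found"
	done
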
	I1217 10:48:12.612434 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:12.622568 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:12.622628 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:12.650041 2974151 cri.go:89] found id: ""
	I1217 10:48:12.650061 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.650069 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:12.650074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:12.650134 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:12.674422 2974151 cri.go:89] found id: ""
	I1217 10:48:12.674437 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.674444 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:12.674450 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:12.674509 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:12.703294 2974151 cri.go:89] found id: ""
	I1217 10:48:12.703308 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.703315 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:12.703320 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:12.703378 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:12.727986 2974151 cri.go:89] found id: ""
	I1217 10:48:12.728006 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.728013 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:12.728019 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:12.728078 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:12.753787 2974151 cri.go:89] found id: ""
	I1217 10:48:12.753800 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.753807 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:12.753812 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:12.753869 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:12.779807 2974151 cri.go:89] found id: ""
	I1217 10:48:12.779831 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.779838 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:12.779844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:12.779904 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:12.806196 2974151 cri.go:89] found id: ""
	I1217 10:48:12.806211 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.806219 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:12.806227 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:12.806237 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:12.862792 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:12.862812 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:12.879906 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:12.879923 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:12.944306 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:12.935386   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.935978   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.937685   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.938348   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.940016   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:12.935386   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.935978   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.937685   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.938348   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.940016   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:12.944316 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:12.944327 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:13.006787 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:13.006812 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:15.546753 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:15.557080 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:15.557147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:15.582296 2974151 cri.go:89] found id: ""
	I1217 10:48:15.582309 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.582316 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:15.582321 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:15.582378 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:15.609992 2974151 cri.go:89] found id: ""
	I1217 10:48:15.610006 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.610013 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:15.610018 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:15.610075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:15.635702 2974151 cri.go:89] found id: ""
	I1217 10:48:15.635716 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.635723 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:15.635728 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:15.635788 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:15.661568 2974151 cri.go:89] found id: ""
	I1217 10:48:15.661582 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.661589 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:15.661595 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:15.661652 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:15.691028 2974151 cri.go:89] found id: ""
	I1217 10:48:15.691042 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.691049 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:15.691056 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:15.691114 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:15.715986 2974151 cri.go:89] found id: ""
	I1217 10:48:15.716009 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.716018 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:15.716023 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:15.716088 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:15.742377 2974151 cri.go:89] found id: ""
	I1217 10:48:15.742391 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.742398 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:15.742406 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:15.742417 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:15.759230 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:15.759248 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:15.824478 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:15.816058   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.816539   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818127   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818799   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.820350   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:15.816058   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.816539   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818127   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818799   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.820350   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:15.824490 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:15.824502 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:15.892784 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:15.892804 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:15.921547 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:15.921562 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:18.478009 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:18.488179 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:18.488242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:18.511813 2974151 cri.go:89] found id: ""
	I1217 10:48:18.511827 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.511843 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:18.511850 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:18.511929 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:18.535876 2974151 cri.go:89] found id: ""
	I1217 10:48:18.535890 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.535897 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:18.535902 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:18.535957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:18.560498 2974151 cri.go:89] found id: ""
	I1217 10:48:18.560512 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.560521 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:18.560526 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:18.560588 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:18.585005 2974151 cri.go:89] found id: ""
	I1217 10:48:18.585018 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.585025 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:18.585030 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:18.585087 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:18.609132 2974151 cri.go:89] found id: ""
	I1217 10:48:18.609146 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.609153 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:18.609158 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:18.609215 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:18.640158 2974151 cri.go:89] found id: ""
	I1217 10:48:18.640172 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.640187 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:18.640194 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:18.640266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:18.669845 2974151 cri.go:89] found id: ""
	I1217 10:48:18.669860 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.669867 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:18.669874 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:18.669884 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:18.726133 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:18.726154 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:18.743323 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:18.743341 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:18.807202 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:18.798544   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.799121   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.800756   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.801823   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.803369   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:18.798544   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.799121   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.800756   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.801823   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.803369   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:18.807212 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:18.807222 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:18.869437 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:18.869456 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:21.398466 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:21.408899 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:21.408973 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:21.433836 2974151 cri.go:89] found id: ""
	I1217 10:48:21.433851 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.433858 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:21.433863 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:21.433925 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:21.458440 2974151 cri.go:89] found id: ""
	I1217 10:48:21.458455 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.458462 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:21.458473 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:21.458531 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:21.482664 2974151 cri.go:89] found id: ""
	I1217 10:48:21.482678 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.482685 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:21.482690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:21.482747 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:21.510499 2974151 cri.go:89] found id: ""
	I1217 10:48:21.510513 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.510520 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:21.510525 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:21.510583 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:21.541182 2974151 cri.go:89] found id: ""
	I1217 10:48:21.541196 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.541204 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:21.541210 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:21.541268 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:21.565692 2974151 cri.go:89] found id: ""
	I1217 10:48:21.565705 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.565717 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:21.565723 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:21.565781 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:21.589704 2974151 cri.go:89] found id: ""
	I1217 10:48:21.589718 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.589725 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:21.589733 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:21.589743 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:21.651127 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:21.642467   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.643175   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.644846   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.645439   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.647213   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:21.642467   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.643175   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.644846   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.645439   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.647213   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:21.651137 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:21.651153 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:21.714087 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:21.714110 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:21.743190 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:21.743205 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:21.803426 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:21.803446 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:24.321453 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:24.331883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:24.331948 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:24.356312 2974151 cri.go:89] found id: ""
	I1217 10:48:24.356327 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.356334 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:24.356340 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:24.356398 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:24.382382 2974151 cri.go:89] found id: ""
	I1217 10:48:24.382395 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.382402 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:24.382407 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:24.382466 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:24.410304 2974151 cri.go:89] found id: ""
	I1217 10:48:24.410318 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.410325 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:24.410330 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:24.410387 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:24.434459 2974151 cri.go:89] found id: ""
	I1217 10:48:24.434474 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.434481 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:24.434486 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:24.434551 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:24.459866 2974151 cri.go:89] found id: ""
	I1217 10:48:24.459881 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.459888 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:24.459893 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:24.459989 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:24.486458 2974151 cri.go:89] found id: ""
	I1217 10:48:24.486471 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.486478 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:24.486484 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:24.486548 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:24.511349 2974151 cri.go:89] found id: ""
	I1217 10:48:24.511363 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.511372 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:24.511379 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:24.511390 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:24.575296 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:24.566670   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.567433   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569106   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569660   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.571348   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:24.566670   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.567433   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569106   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569660   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.571348   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:24.575314 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:24.575325 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:24.637043 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:24.637063 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:24.665459 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:24.665475 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:24.722699 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:24.722722 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:27.240739 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:27.252359 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:27.252432 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:27.280163 2974151 cri.go:89] found id: ""
	I1217 10:48:27.280177 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.280196 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:27.280201 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:27.280266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:27.309589 2974151 cri.go:89] found id: ""
	I1217 10:48:27.309603 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.309622 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:27.309627 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:27.309692 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:27.337538 2974151 cri.go:89] found id: ""
	I1217 10:48:27.337552 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.337559 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:27.337564 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:27.337622 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:27.361942 2974151 cri.go:89] found id: ""
	I1217 10:48:27.361957 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.361965 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:27.361970 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:27.362029 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:27.390818 2974151 cri.go:89] found id: ""
	I1217 10:48:27.390832 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.390840 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:27.390845 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:27.390908 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:27.422856 2974151 cri.go:89] found id: ""
	I1217 10:48:27.422871 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.422878 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:27.422883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:27.422943 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:27.448978 2974151 cri.go:89] found id: ""
	I1217 10:48:27.448992 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.448999 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:27.449007 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:27.449016 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:27.504505 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:27.504523 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:27.521306 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:27.521327 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:27.585173 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:27.576673   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.577398   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579125   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579750   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.581292   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:27.576673   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.577398   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579125   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579750   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.581292   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:27.585182 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:27.585193 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:27.646817 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:27.646836 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:30.175129 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:30.186313 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:30.186377 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:30.213450 2974151 cri.go:89] found id: ""
	I1217 10:48:30.213464 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.213471 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:30.213476 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:30.213541 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:30.240025 2974151 cri.go:89] found id: ""
	I1217 10:48:30.240039 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.240046 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:30.240051 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:30.240126 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:30.286752 2974151 cri.go:89] found id: ""
	I1217 10:48:30.286766 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.286774 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:30.286779 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:30.286858 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:30.317210 2974151 cri.go:89] found id: ""
	I1217 10:48:30.317232 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.317240 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:30.317245 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:30.317305 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:30.345461 2974151 cri.go:89] found id: ""
	I1217 10:48:30.345475 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.345482 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:30.345487 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:30.345546 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:30.375558 2974151 cri.go:89] found id: ""
	I1217 10:48:30.375576 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.375590 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:30.375595 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:30.375655 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:30.401652 2974151 cri.go:89] found id: ""
	I1217 10:48:30.401668 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.401675 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:30.401683 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:30.401693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:30.462370 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:30.462393 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:30.480350 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:30.480366 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:30.545595 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:30.536885   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.537607   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539274   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539750   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.541271   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:30.536885   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.537607   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539274   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539750   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.541271   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:30.545607 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:30.545619 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:30.609333 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:30.609353 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
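With every cycle ending in the same connection-refused stderr, the kubectl output carries no new information; the signal, if any, is in the unit logs the loop gathers. These are the exact commands it runs, and they can be replayed by hand inside the node to see why the control plane never came up:

	# log sources collected on every cycle above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
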
	I1217 10:48:33.138648 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:33.149215 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:33.149282 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:33.174735 2974151 cri.go:89] found id: ""
	I1217 10:48:33.174755 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.174764 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:33.174769 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:33.174832 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:33.200480 2974151 cri.go:89] found id: ""
	I1217 10:48:33.200495 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.200502 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:33.200507 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:33.200567 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:33.230102 2974151 cri.go:89] found id: ""
	I1217 10:48:33.230117 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.230124 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:33.230129 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:33.230186 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:33.273250 2974151 cri.go:89] found id: ""
	I1217 10:48:33.273264 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.273271 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:33.273278 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:33.273336 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:33.304262 2974151 cri.go:89] found id: ""
	I1217 10:48:33.304276 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.304293 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:33.304299 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:33.304359 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:33.332160 2974151 cri.go:89] found id: ""
	I1217 10:48:33.332174 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.332181 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:33.332186 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:33.332247 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:33.357270 2974151 cri.go:89] found id: ""
	I1217 10:48:33.357284 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.357291 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:33.357299 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:33.357308 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:33.420730 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:33.420751 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:33.448992 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:33.449007 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:33.504960 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:33.504979 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:33.521896 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:33.521913 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:33.584222 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:33.575275   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.576061   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.577717   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.578259   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.580086   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:33.575275   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.576061   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.577717   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.578259   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.580086   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
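	(The block above is one iteration of minikube's apiserver wait loop: probe for a kube-apiserver process, list each expected control-plane container via crictl, then gather kubelet, dmesg, containerd, and `kubectl describe nodes` output. A rough by-hand replay, assuming the functional-232588 profile is still up and `minikube ssh` works; this is a sketch of the probe sequence, not minikube's own code:
	    # Hypothetical manual replay of the probe cycle shown above.
	    minikube -p functional-232588 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      minikube -p functional-232588 ssh -- sudo crictl ps -a --quiet --name="$c"   # empty output here: no containers found
	    done
	    minikube -p functional-232588 ssh -- sudo journalctl -u kubelet -n 400
	)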
	[The 10:48:33 probe cycle above then repeated at 10:48:36, 10:48:39, 10:48:42, 10:48:45, 10:48:48, 10:48:51, and 10:48:54 with identical results: no kube-apiserver process, zero containers matching kube-apiserver/etcd/coredns/kube-scheduler/kube-proxy/kube-controller-manager/kindnet, and `kubectl describe nodes` failing with "The connection to the server localhost:8441 was refused - did you specify the right host or port?". Only the timestamps, the kubectl PIDs (13391, 13496, 13601, 13703, 13808, 13926, 14008), and the order of the log-gathering steps varied.]
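	(Every attempt fails at kubectl's client-side API discovery, the memcache.go "couldn't get current server API group list" errors, because nothing is listening on localhost:8441 inside the node. A quick check, assuming `ss` from iproute2 is available in the minikube image; empty output means nothing is bound to the port:
	    # Hypothetical check: list listeners on the apiserver port.
	    minikube -p functional-232588 ssh -- sudo ss -tlnp 'sport = :8441'
	)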
	I1217 10:48:56.995650 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:57.014000 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:57.014068 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:57.039621 2974151 cri.go:89] found id: ""
	I1217 10:48:57.039635 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.039642 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:57.039647 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:57.039706 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:57.063811 2974151 cri.go:89] found id: ""
	I1217 10:48:57.063824 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.063832 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:57.063837 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:57.063901 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:57.089763 2974151 cri.go:89] found id: ""
	I1217 10:48:57.089777 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.089784 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:57.089789 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:57.089849 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:57.119137 2974151 cri.go:89] found id: ""
	I1217 10:48:57.119151 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.119157 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:57.119163 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:57.119222 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:57.145301 2974151 cri.go:89] found id: ""
	I1217 10:48:57.145317 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.145324 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:57.145330 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:57.145390 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:57.169967 2974151 cri.go:89] found id: ""
	I1217 10:48:57.169981 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.169989 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:57.169994 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:57.170055 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:57.199678 2974151 cri.go:89] found id: ""
	I1217 10:48:57.199693 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.199700 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:57.199708 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:57.199718 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:57.259994 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:57.260013 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:57.283244 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:57.283262 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:57.355664 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:57.347248   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.348013   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.349816   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.350323   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.351848   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:57.355675 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:57.355686 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:57.418570 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:57.418593 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
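
The "container status" step uses a small shell fallback so the same gather works on any runtime: the backticks resolve crictl's path (or leave the bare name), and if that listing fails, docker is tried instead. The command appears verbatim in the log above:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

On this containerd run it is the crictl branch that executes.
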
	I1217 10:48:59.953153 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:59.963676 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:59.963736 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:59.989636 2974151 cri.go:89] found id: ""
	I1217 10:48:59.989654 2974151 logs.go:282] 0 containers: []
	W1217 10:48:59.989662 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:59.989667 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:59.989734 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:00.158254 2974151 cri.go:89] found id: ""
	I1217 10:49:00.158276 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.158284 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:00.158290 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:00.158371 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:00.272664 2974151 cri.go:89] found id: ""
	I1217 10:49:00.272680 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.272687 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:00.272693 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:00.272790 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:00.329030 2974151 cri.go:89] found id: ""
	I1217 10:49:00.329045 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.329052 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:00.329058 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:00.329123 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:00.376045 2974151 cri.go:89] found id: ""
	I1217 10:49:00.376060 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.376068 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:00.376074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:00.376141 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:00.406187 2974151 cri.go:89] found id: ""
	I1217 10:49:00.406202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.406210 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:00.406216 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:00.406281 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:00.436523 2974151 cri.go:89] found id: ""
	I1217 10:49:00.436538 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.436546 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:00.436554 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:00.436575 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:00.504375 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:00.495726   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.496591   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498206   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498541   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.500005   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:00.504450 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:00.504460 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:00.568543 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:00.568563 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:00.600756 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:00.600773 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:00.662114 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:00.662131 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
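
Kubelet and containerd logs are pulled from journald, and kernel messages from dmesg, each capped at the last 400 lines. The exact commands appear in the log and can be rerun verbatim on the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
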
	I1217 10:49:03.181138 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:03.191733 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:03.191796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:03.220693 2974151 cri.go:89] found id: ""
	I1217 10:49:03.220707 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.220714 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:03.220719 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:03.220775 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:03.245346 2974151 cri.go:89] found id: ""
	I1217 10:49:03.245359 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.245366 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:03.245371 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:03.245434 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:03.283019 2974151 cri.go:89] found id: ""
	I1217 10:49:03.283034 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.283042 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:03.283072 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:03.283134 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:03.312584 2974151 cri.go:89] found id: ""
	I1217 10:49:03.312599 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.312605 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:03.312611 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:03.312670 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:03.337325 2974151 cri.go:89] found id: ""
	I1217 10:49:03.337340 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.337347 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:03.337352 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:03.337421 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:03.363072 2974151 cri.go:89] found id: ""
	I1217 10:49:03.363086 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.363093 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:03.363099 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:03.363156 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:03.388307 2974151 cri.go:89] found id: ""
	I1217 10:49:03.388321 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.388328 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:03.388336 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:03.388346 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:03.450591 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:03.450611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:03.479831 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:03.479848 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:03.538921 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:03.538940 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:03.557193 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:03.557210 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:03.629818 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:03.620815   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.621960   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.622408   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.623908   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.624403   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
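
Every describe-nodes attempt fails identically: kubectl, invoked against the in-VM kubeconfig, cannot reach https://localhost:8441, which is consistent with the empty container listings above. A quick way to confirm nothing is listening on the apiserver port (a hypothetical follow-up, not taken from this log):

    # hypothetical check; assumes ss (iproute2) is available on the node
    sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'
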
	I1217 10:49:06.130079 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:06.140562 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:06.140625 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:06.176078 2974151 cri.go:89] found id: ""
	I1217 10:49:06.176092 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.176100 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:06.176106 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:06.176165 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:06.201648 2974151 cri.go:89] found id: ""
	I1217 10:49:06.201669 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.201678 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:06.201683 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:06.201741 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:06.225531 2974151 cri.go:89] found id: ""
	I1217 10:49:06.225545 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.225552 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:06.225557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:06.225615 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:06.252027 2974151 cri.go:89] found id: ""
	I1217 10:49:06.252042 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.252049 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:06.252056 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:06.252118 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:06.280340 2974151 cri.go:89] found id: ""
	I1217 10:49:06.280353 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.280361 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:06.280366 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:06.280449 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:06.313759 2974151 cri.go:89] found id: ""
	I1217 10:49:06.313773 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.313781 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:06.313786 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:06.313846 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:06.338616 2974151 cri.go:89] found id: ""
	I1217 10:49:06.338630 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.338638 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:06.338645 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:06.338655 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:06.394759 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:06.394784 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:06.412192 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:06.412208 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:06.475020 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:06.466865   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.467591   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469274   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469719   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.471184   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:06.475030 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:06.475039 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:06.537503 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:06.537522 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
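
A found id: "" line means the crictl query returned no container IDs at all; since ps -a includes exited containers, none of the control-plane containers were ever created in the runtime. The raw listing can be inspected directly; a sketch assuming containerd's default socket path:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
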
	I1217 10:49:09.067381 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:09.078169 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:09.078242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:09.102188 2974151 cri.go:89] found id: ""
	I1217 10:49:09.102202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.102210 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:09.102215 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:09.102276 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:09.127428 2974151 cri.go:89] found id: ""
	I1217 10:49:09.127443 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.127457 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:09.127462 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:09.127523 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:09.155928 2974151 cri.go:89] found id: ""
	I1217 10:49:09.155943 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.155951 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:09.155956 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:09.156013 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:09.180962 2974151 cri.go:89] found id: ""
	I1217 10:49:09.180976 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.180983 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:09.180988 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:09.181047 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:09.206446 2974151 cri.go:89] found id: ""
	I1217 10:49:09.206459 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.206466 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:09.206471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:09.206527 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:09.234163 2974151 cri.go:89] found id: ""
	I1217 10:49:09.234177 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.234184 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:09.234191 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:09.234248 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:09.266062 2974151 cri.go:89] found id: ""
	I1217 10:49:09.266076 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.266083 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:09.266091 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:09.266100 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:09.331047 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:09.331068 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:09.348066 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:09.348082 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:09.416466 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:09.408138   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.408821   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410542   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410884   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.412400   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:09.416475 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:09.416488 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:09.477634 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:09.477656 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:12.006559 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:12.017999 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:12.018064 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:12.043667 2974151 cri.go:89] found id: ""
	I1217 10:49:12.043681 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.043689 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:12.043694 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:12.043755 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:12.067975 2974151 cri.go:89] found id: ""
	I1217 10:49:12.068000 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.068008 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:12.068013 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:12.068082 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:12.093913 2974151 cri.go:89] found id: ""
	I1217 10:49:12.093936 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.093944 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:12.093950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:12.094011 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:12.123009 2974151 cri.go:89] found id: ""
	I1217 10:49:12.123022 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.123029 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:12.123046 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:12.123121 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:12.152263 2974151 cri.go:89] found id: ""
	I1217 10:49:12.152277 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.152284 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:12.152299 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:12.152357 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:12.178500 2974151 cri.go:89] found id: ""
	I1217 10:49:12.178514 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.178521 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:12.178527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:12.178601 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:12.203660 2974151 cri.go:89] found id: ""
	I1217 10:49:12.203674 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.203692 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:12.203700 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:12.203711 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:12.261019 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:12.261039 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:12.279774 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:12.279790 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:12.350172 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:12.342156   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.342650   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344118   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344659   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.346217   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:12.350182 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:12.350192 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:12.412715 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:12.412734 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:14.942372 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:14.953073 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:14.953133 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:14.986889 2974151 cri.go:89] found id: ""
	I1217 10:49:14.986903 2974151 logs.go:282] 0 containers: []
	W1217 10:49:14.986910 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:14.986916 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:14.987012 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:15.024956 2974151 cri.go:89] found id: ""
	I1217 10:49:15.024972 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.024980 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:15.024986 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:15.025062 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:15.055135 2974151 cri.go:89] found id: ""
	I1217 10:49:15.055159 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.055170 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:15.055175 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:15.055244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:15.083268 2974151 cri.go:89] found id: ""
	I1217 10:49:15.083283 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.083310 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:15.083316 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:15.083386 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:15.110734 2974151 cri.go:89] found id: ""
	I1217 10:49:15.110750 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.110757 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:15.110764 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:15.110825 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:15.140854 2974151 cri.go:89] found id: ""
	I1217 10:49:15.140869 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.140876 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:15.140881 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:15.140981 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:15.167259 2974151 cri.go:89] found id: ""
	I1217 10:49:15.167273 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.167280 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:15.167288 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:15.167298 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:15.224081 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:15.224100 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:15.241661 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:15.241679 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:15.322485 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:15.313320   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.313943   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316017   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316658   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.318128   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:15.322495 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:15.322517 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:15.385975 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:15.385996 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:17.915565 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:17.925558 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:17.925619 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:17.950881 2974151 cri.go:89] found id: ""
	I1217 10:49:17.950895 2974151 logs.go:282] 0 containers: []
	W1217 10:49:17.950902 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:17.950907 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:17.950964 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:17.975955 2974151 cri.go:89] found id: ""
	I1217 10:49:17.975969 2974151 logs.go:282] 0 containers: []
	W1217 10:49:17.975975 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:17.975980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:17.976039 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:18.004484 2974151 cri.go:89] found id: ""
	I1217 10:49:18.004503 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.004512 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:18.004517 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:18.004597 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:18.031679 2974151 cri.go:89] found id: ""
	I1217 10:49:18.031694 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.031702 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:18.031708 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:18.031775 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:18.059398 2974151 cri.go:89] found id: ""
	I1217 10:49:18.059412 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.059436 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:18.059443 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:18.059504 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:18.085330 2974151 cri.go:89] found id: ""
	I1217 10:49:18.085344 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.085352 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:18.085357 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:18.085420 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:18.114569 2974151 cri.go:89] found id: ""
	I1217 10:49:18.114585 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.114592 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:18.114600 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:18.114611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:18.178110 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:18.169772   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.170633   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172208   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172731   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.174231   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:18.178122 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:18.178132 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:18.241410 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:18.241434 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:18.273882 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:18.273898 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:18.334306 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:18.334324 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
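
The remaining cycles are unchanged: no kube-apiserver process ever appears, so every pass finds zero containers and re-gathers the same logs until the start wait gives up. Condensed, the wait behaves like the following poll (an illustration of the cadence visible in the timestamps, not minikube's actual code):

    # keep probing until an apiserver process shows up, roughly 3 s apart
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 3; done
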
	I1217 10:49:20.852121 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:20.862188 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:20.862248 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:20.886819 2974151 cri.go:89] found id: ""
	I1217 10:49:20.886834 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.886850 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:20.886857 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:20.886930 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:20.913071 2974151 cri.go:89] found id: ""
	I1217 10:49:20.913086 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.913093 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:20.913098 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:20.913157 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:20.937301 2974151 cri.go:89] found id: ""
	I1217 10:49:20.937315 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.937322 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:20.937327 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:20.937386 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:20.966247 2974151 cri.go:89] found id: ""
	I1217 10:49:20.966260 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.966267 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:20.966272 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:20.966328 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:20.991713 2974151 cri.go:89] found id: ""
	I1217 10:49:20.991727 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.991734 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:20.991739 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:20.991796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:21.017813 2974151 cri.go:89] found id: ""
	I1217 10:49:21.017828 2974151 logs.go:282] 0 containers: []
	W1217 10:49:21.017835 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:21.017841 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:21.017901 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:21.047576 2974151 cri.go:89] found id: ""
	I1217 10:49:21.047590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:21.047598 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:21.047605 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:21.047615 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:21.109681 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:21.109707 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:21.127095 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:21.127114 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:21.192482 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:21.184199   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.184777   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186485   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186953   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.188551   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:21.184199   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.184777   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186485   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186953   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.188551   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:21.192493 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:21.192504 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:21.256363 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:21.256383 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
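
Before gathering logs, each cycle asks the CRI for every control-plane container by name; in this run all seven probes come back empty. A compact sketch of the same scan, assuming crictl on PATH and passwordless sudo (the listContainerIDs helper name is made up; the command and the component list are taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Control-plane components probed in the log, in the order they appear.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    // listContainerIDs mirrors the logged command
    // `sudo crictl ps -a --quiet --name=<component>`, which prints one
    // container ID per line (or nothing at all).
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
        }
    }

Every probe returning zero IDs is what produces the repeated `0 containers: []` lines above: the kubelet never managed to start any control-plane pod, so there is nothing for the apiserver health check to find.
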
	I1217 10:49:23.824987 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:23.835117 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:23.835179 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:23.860953 2974151 cri.go:89] found id: ""
	I1217 10:49:23.860966 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.860973 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:23.860979 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:23.861036 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:23.894776 2974151 cri.go:89] found id: ""
	I1217 10:49:23.894790 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.894797 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:23.894802 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:23.894863 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:23.923645 2974151 cri.go:89] found id: ""
	I1217 10:49:23.923660 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.923667 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:23.923678 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:23.923735 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:23.950354 2974151 cri.go:89] found id: ""
	I1217 10:49:23.950368 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.950374 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:23.950380 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:23.950437 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:23.974645 2974151 cri.go:89] found id: ""
	I1217 10:49:23.974659 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.974666 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:23.974671 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:23.974732 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:24.000121 2974151 cri.go:89] found id: ""
	I1217 10:49:24.000149 2974151 logs.go:282] 0 containers: []
	W1217 10:49:24.000157 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:24.000163 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:24.000242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:24.034475 2974151 cri.go:89] found id: ""
	I1217 10:49:24.034489 2974151 logs.go:282] 0 containers: []
	W1217 10:49:24.034497 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:24.034505 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:24.034514 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:24.099963 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:24.099984 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:24.136430 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:24.136447 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:24.192589 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:24.192651 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:24.209690 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:24.209707 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:24.292778 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:24.284539   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.285387   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287069   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287380   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.288843   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:24.284539   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.285387   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287069   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287380   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.288843   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
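
The `describe nodes` step fails identically every cycle: nothing is listening on 127.0.0.1:8441, so each discovery request dies with `connection refused` and kubectl exits 1. A sketch that reproduces the probe with separate stdout/stderr capture (the binary and kubeconfig paths are copied from the log; they only exist inside the minikube node, so running this anywhere else is an assumption):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Command string copied from the log; the kubectl binary path
        // only exists inside the minikube node.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")

        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr

        if err := cmd.Run(); err != nil {
            // With no apiserver on localhost:8441 this is the expected
            // path: exit status 1 plus the "connection refused" lines
            // seen above.
            fmt.Printf("describe nodes failed: %v\nstderr:\n%s", err, stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }
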
	I1217 10:49:26.793038 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:26.803569 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:26.803630 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:26.829202 2974151 cri.go:89] found id: ""
	I1217 10:49:26.829215 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.829222 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:26.829227 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:26.829285 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:26.855339 2974151 cri.go:89] found id: ""
	I1217 10:49:26.855353 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.855359 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:26.855365 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:26.855434 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:26.882145 2974151 cri.go:89] found id: ""
	I1217 10:49:26.882160 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.882168 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:26.882174 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:26.882231 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:26.906912 2974151 cri.go:89] found id: ""
	I1217 10:49:26.906925 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.906932 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:26.906937 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:26.906994 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:26.931691 2974151 cri.go:89] found id: ""
	I1217 10:49:26.931714 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.931722 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:26.931732 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:26.931798 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:26.957483 2974151 cri.go:89] found id: ""
	I1217 10:49:26.957497 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.957504 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:26.957510 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:26.957570 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:26.981546 2974151 cri.go:89] found id: ""
	I1217 10:49:26.981560 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.981567 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:26.981574 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:26.981584 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:27.038884 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:27.038905 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:27.059063 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:27.059079 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:27.122721 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:27.114006   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.114575   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.116274   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.117079   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.118797   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:27.114006   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.114575   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.116274   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.117079   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.118797   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:27.122731 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:27.122741 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:27.188207 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:27.188227 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
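
The timestamps show the driver re-checking for an apiserver process roughly every three seconds (10:49:18, :20, :23, :26, :29, ...). A sketch of such a wait loop with a deadline, using the same pgrep pattern as the log (the interval and timeout values here are illustrative, not minikube's actual settings):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a kube-apiserver process the way the log
    // does (`sudo pgrep -xnf kube-apiserver.*minikube.*`), giving up at
    // the deadline.
    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            // pgrep exits 0 when a matching process exists, 1 otherwise.
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForAPIServer(3*time.Second, time.Minute); err != nil {
            fmt.Println(err)
        }
    }
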
	I1217 10:49:29.720397 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:29.731016 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:29.731089 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:29.759816 2974151 cri.go:89] found id: ""
	I1217 10:49:29.759836 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.759843 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:29.759848 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:29.759909 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:29.784725 2974151 cri.go:89] found id: ""
	I1217 10:49:29.784739 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.784747 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:29.784752 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:29.784813 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:29.810710 2974151 cri.go:89] found id: ""
	I1217 10:49:29.810724 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.810731 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:29.810736 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:29.810796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:29.835166 2974151 cri.go:89] found id: ""
	I1217 10:49:29.835180 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.835187 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:29.835196 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:29.835255 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:29.862724 2974151 cri.go:89] found id: ""
	I1217 10:49:29.862738 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.862745 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:29.862750 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:29.862814 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:29.887572 2974151 cri.go:89] found id: ""
	I1217 10:49:29.887590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.887597 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:29.887608 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:29.887676 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:29.911679 2974151 cri.go:89] found id: ""
	I1217 10:49:29.911693 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.911700 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:29.911708 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:29.911717 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:29.974573 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:29.974595 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:30.028175 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:30.028195 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:30.102876 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:30.102898 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:30.120802 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:30.120826 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:30.191763 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:30.183313   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.184024   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.185583   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.186151   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.187552   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:30.183313   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.184024   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.185583   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.186151   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.187552   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
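
The five near-identical memcache.go lines per attempt come from kubectl's discovery client retrying the /api group-list request before giving up. A minimal client-go probe that should hit the same error once, assuming k8s.io/client-go is available in the module and reusing the kubeconfig path from the log:

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path copied from the log; outside the minikube node
        // it will not exist and the load fails earlier.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        dc, err := discovery.NewDiscoveryClientForConfig(config)
        if err != nil {
            fmt.Println("discovery client:", err)
            return
        }
        // With nothing listening on localhost:8441 this returns the same
        // "connection refused" error that kubectl logs repeatedly above.
        if _, err := dc.ServerGroups(); err != nil {
            fmt.Println("server groups:", err)
        }
    }
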
	I1217 10:49:32.692593 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:32.703024 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:32.703087 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:32.733277 2974151 cri.go:89] found id: ""
	I1217 10:49:32.733302 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.733310 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:32.733317 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:32.733384 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:32.763219 2974151 cri.go:89] found id: ""
	I1217 10:49:32.763234 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.763241 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:32.763246 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:32.763304 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:32.793128 2974151 cri.go:89] found id: ""
	I1217 10:49:32.793143 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.793150 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:32.793155 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:32.793213 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:32.824178 2974151 cri.go:89] found id: ""
	I1217 10:49:32.824194 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.824201 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:32.824206 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:32.824271 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:32.854145 2974151 cri.go:89] found id: ""
	I1217 10:49:32.854170 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.854178 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:32.854183 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:32.854251 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:32.879767 2974151 cri.go:89] found id: ""
	I1217 10:49:32.879797 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.879804 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:32.879809 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:32.879899 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:32.909819 2974151 cri.go:89] found id: ""
	I1217 10:49:32.909833 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.909842 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:32.909849 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:32.909859 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:32.938841 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:32.938857 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:32.995133 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:32.995156 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:33.014953 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:33.014974 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:33.085045 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:33.075667   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.076471   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078226   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078820   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.080383   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:33.075667   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.076471   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078226   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078820   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.080383   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:33.085054 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:33.085065 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:35.651037 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:35.661187 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:35.661246 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:35.687255 2974151 cri.go:89] found id: ""
	I1217 10:49:35.687270 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.687277 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:35.687282 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:35.687340 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:35.713953 2974151 cri.go:89] found id: ""
	I1217 10:49:35.713967 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.713974 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:35.713980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:35.714040 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:35.742852 2974151 cri.go:89] found id: ""
	I1217 10:49:35.742866 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.742874 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:35.742879 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:35.742937 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:35.768219 2974151 cri.go:89] found id: ""
	I1217 10:49:35.768233 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.768240 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:35.768246 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:35.768314 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:35.792498 2974151 cri.go:89] found id: ""
	I1217 10:49:35.792512 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.792519 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:35.792524 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:35.792583 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:35.818063 2974151 cri.go:89] found id: ""
	I1217 10:49:35.818077 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.818084 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:35.818089 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:35.818147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:35.843090 2974151 cri.go:89] found id: ""
	I1217 10:49:35.843105 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.843111 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:35.843119 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:35.843129 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:35.899655 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:35.899673 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:35.916834 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:35.916850 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:35.982052 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:35.973406   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.974102   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.975751   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.976284   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.977956   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:35.973406   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.974102   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.975751   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.976284   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.977956   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:35.982062 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:35.982075 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:36.049729 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:36.049750 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:38.582447 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:38.592471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:38.592528 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:38.617757 2974151 cri.go:89] found id: ""
	I1217 10:49:38.617772 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.617779 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:38.617786 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:38.617845 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:38.647228 2974151 cri.go:89] found id: ""
	I1217 10:49:38.647242 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.647249 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:38.647254 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:38.647312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:38.672309 2974151 cri.go:89] found id: ""
	I1217 10:49:38.672324 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.672331 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:38.672336 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:38.672395 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:38.699575 2974151 cri.go:89] found id: ""
	I1217 10:49:38.699590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.699597 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:38.699603 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:38.699660 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:38.729276 2974151 cri.go:89] found id: ""
	I1217 10:49:38.729290 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.729297 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:38.729303 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:38.729361 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:38.757110 2974151 cri.go:89] found id: ""
	I1217 10:49:38.757124 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.757131 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:38.757137 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:38.757197 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:38.783523 2974151 cri.go:89] found id: ""
	I1217 10:49:38.783537 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.783544 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:38.783551 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:38.783562 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:38.854691 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:38.846060   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.846723   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.848354   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.849037   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.850802   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:38.846060   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.846723   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.848354   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.849037   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.850802   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:38.854701 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:38.854713 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:38.918821 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:38.918843 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:38.947201 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:38.947217 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:39.004566 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:39.004587 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:41.522977 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:41.536227 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:41.536288 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:41.566436 2974151 cri.go:89] found id: ""
	I1217 10:49:41.566451 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.566458 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:41.566466 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:41.566527 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:41.599863 2974151 cri.go:89] found id: ""
	I1217 10:49:41.599879 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.599886 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:41.599892 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:41.599956 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:41.631187 2974151 cri.go:89] found id: ""
	I1217 10:49:41.631202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.631209 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:41.631216 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:41.631274 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:41.658402 2974151 cri.go:89] found id: ""
	I1217 10:49:41.658416 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.658423 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:41.658428 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:41.658487 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:41.686724 2974151 cri.go:89] found id: ""
	I1217 10:49:41.686738 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.686745 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:41.686751 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:41.686809 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:41.721194 2974151 cri.go:89] found id: ""
	I1217 10:49:41.721208 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.721215 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:41.721220 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:41.721279 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:41.750295 2974151 cri.go:89] found id: ""
	I1217 10:49:41.750309 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.750316 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:41.750323 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:41.750334 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:41.779389 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:41.779406 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:41.837692 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:41.837715 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:41.854830 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:41.854847 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:41.919451 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:41.911491   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.912035   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.913552   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.914095   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.915570   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:41.911491   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.912035   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.913552   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.914095   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.915570   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:41.919461 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:41.919470 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:44.482271 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:44.492656 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:44.492720 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:44.530744 2974151 cri.go:89] found id: ""
	I1217 10:49:44.530758 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.530765 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:44.530770 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:44.530831 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:44.556602 2974151 cri.go:89] found id: ""
	I1217 10:49:44.556616 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.556624 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:44.556629 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:44.556687 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:44.582820 2974151 cri.go:89] found id: ""
	I1217 10:49:44.582835 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.582842 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:44.582847 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:44.582906 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:44.607152 2974151 cri.go:89] found id: ""
	I1217 10:49:44.607166 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.607173 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:44.607184 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:44.607244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:44.634565 2974151 cri.go:89] found id: ""
	I1217 10:49:44.634579 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.634587 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:44.634592 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:44.634662 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:44.661979 2974151 cri.go:89] found id: ""
	I1217 10:49:44.661993 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.662000 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:44.662005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:44.662066 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:44.686675 2974151 cri.go:89] found id: ""
	I1217 10:49:44.686697 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.686705 2974151 logs.go:284] No container was found matching "kindnet"
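The block above is minikube's CRI sweep: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) it lists containers in any state and finds none, so the control plane was never created, not merely crashed. The sweep is easy to reproduce by hand on the node; a sketch using the same crictl flags the log shows:

    # list containers in any state whose name matches each control-plane component
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        echo "== ${name} =="
        sudo crictl ps -a --quiet --name="${name}"
    done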
	I1217 10:49:44.686713 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:44.686722 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:44.743011 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:44.743033 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
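With no containers to inspect, the collector falls back to node-level sources: the kubelet and containerd journald units and the kernel ring buffer, each capped at 400 lines. The equivalent manual commands, with the flags copied verbatim from the log:

    sudo journalctl -u kubelet -n 400      # kubelet unit, last 400 lines
    sudo journalctl -u containerd -n 400   # containerd unit, gathered later in the same cycle
    # kernel messages, warning level and above, human-readable, no color or pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400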
	I1217 10:49:44.759816 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:44.759833 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:44.824819 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:44.816544   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.817205   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.818745   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.819310   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.820870   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
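The "describe nodes" step runs the version-pinned kubectl that minikube placed on the node against the node-local kubeconfig, so this failure is the same symptom as above rather than a separate bug: that kubeconfig points at localhost:8441, and the dial is refused because no apiserver process exists, exactly what the pgrep and crictl checks already showed. To reproduce it directly on the node, with the paths taken verbatim from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig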
	I1217 10:49:44.824830 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:44.824841 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:44.890788 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:44.890807 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
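The "container status" step uses a double fallback: `which crictl || echo crictl` degrades to the bare command name if crictl is not on PATH, and `|| sudo docker ps -a` covers Docker-runtime clusters; on this containerd cluster the crictl branch is the one that runs. The whole probe cycle (pgrep for the apiserver process, the CRI sweep, then log gathering, with the gathering steps in varying order) repeats at roughly three-second intervals for the rest of this trace, apparently until the start wait times out:

    # container status with runtime fallback, as run by the collector
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a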
	I1217 10:49:47.418865 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:47.429392 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:47.429467 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:47.454629 2974151 cri.go:89] found id: ""
	I1217 10:49:47.454643 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.454650 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:47.454655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:47.454766 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:47.480876 2974151 cri.go:89] found id: ""
	I1217 10:49:47.480890 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.480897 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:47.480902 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:47.480970 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:47.512027 2974151 cri.go:89] found id: ""
	I1217 10:49:47.512041 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.512054 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:47.512060 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:47.512120 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:47.539586 2974151 cri.go:89] found id: ""
	I1217 10:49:47.539600 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.539608 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:47.539613 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:47.539671 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:47.566423 2974151 cri.go:89] found id: ""
	I1217 10:49:47.566437 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.566444 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:47.566450 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:47.566507 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:47.592329 2974151 cri.go:89] found id: ""
	I1217 10:49:47.592343 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.592350 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:47.592355 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:47.592442 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:47.617999 2974151 cri.go:89] found id: ""
	I1217 10:49:47.618013 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.618020 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:47.618028 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:47.618037 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:47.678218 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:47.678240 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:47.695642 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:47.695659 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:47.762123 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:47.753095   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.754063   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.755748   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.756187   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.757812   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:47.762133 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:47.762146 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:47.828387 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:47.828408 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:50.363629 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:50.373970 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:50.374026 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:50.398664 2974151 cri.go:89] found id: ""
	I1217 10:49:50.398678 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.398685 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:50.398690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:50.398749 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:50.424119 2974151 cri.go:89] found id: ""
	I1217 10:49:50.424132 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.424139 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:50.424144 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:50.424203 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:50.450501 2974151 cri.go:89] found id: ""
	I1217 10:49:50.450516 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.450523 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:50.450529 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:50.450591 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:50.479279 2974151 cri.go:89] found id: ""
	I1217 10:49:50.479330 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.479338 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:50.479344 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:50.479402 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:50.514044 2974151 cri.go:89] found id: ""
	I1217 10:49:50.514058 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.514065 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:50.514070 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:50.514147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:50.550857 2974151 cri.go:89] found id: ""
	I1217 10:49:50.550871 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.550878 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:50.550883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:50.550943 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:50.586702 2974151 cri.go:89] found id: ""
	I1217 10:49:50.586716 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.586724 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:50.586731 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:50.586740 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:50.649317 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:50.649338 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:50.681689 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:50.681706 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:50.739069 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:50.739092 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:50.756760 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:50.756777 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:50.826240 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:50.816693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.817339   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819115   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819743   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.821406   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:53.327009 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:53.338042 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:53.338105 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:53.364395 2974151 cri.go:89] found id: ""
	I1217 10:49:53.364409 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.364437 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:53.364443 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:53.364504 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:53.391405 2974151 cri.go:89] found id: ""
	I1217 10:49:53.391418 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.391425 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:53.391435 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:53.391495 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:53.415894 2974151 cri.go:89] found id: ""
	I1217 10:49:53.415909 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.415916 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:53.415921 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:53.415987 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:53.441489 2974151 cri.go:89] found id: ""
	I1217 10:49:53.441505 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.441512 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:53.441518 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:53.441577 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:53.470465 2974151 cri.go:89] found id: ""
	I1217 10:49:53.470480 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.470487 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:53.470492 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:53.470580 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:53.496777 2974151 cri.go:89] found id: ""
	I1217 10:49:53.496791 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.496798 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:53.496804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:53.496862 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:53.522462 2974151 cri.go:89] found id: ""
	I1217 10:49:53.522477 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.522484 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:53.522492 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:53.522503 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:53.587962 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:53.587981 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:53.605021 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:53.605038 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:53.674653 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:53.666595   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.667148   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.668629   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.669055   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.670469   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:53.674671 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:53.674682 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:53.736888 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:53.736908 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:56.264574 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:56.274948 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:56.275019 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:56.306086 2974151 cri.go:89] found id: ""
	I1217 10:49:56.306108 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.306116 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:56.306122 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:56.306189 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:56.331503 2974151 cri.go:89] found id: ""
	I1217 10:49:56.331517 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.331524 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:56.331529 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:56.331588 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:56.357713 2974151 cri.go:89] found id: ""
	I1217 10:49:56.357727 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.357734 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:56.357740 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:56.357804 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:56.386307 2974151 cri.go:89] found id: ""
	I1217 10:49:56.386322 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.386329 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:56.386335 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:56.386392 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:56.411103 2974151 cri.go:89] found id: ""
	I1217 10:49:56.411116 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.411148 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:56.411154 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:56.411210 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:56.438603 2974151 cri.go:89] found id: ""
	I1217 10:49:56.438617 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.438632 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:56.438638 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:56.438700 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:56.463485 2974151 cri.go:89] found id: ""
	I1217 10:49:56.463499 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.463506 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:56.463513 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:56.463526 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:56.480151 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:56.480170 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:56.564122 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:56.555873   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.556612   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558127   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558422   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.559904   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:56.564133 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:56.564152 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:56.631606 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:56.631625 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:56.658603 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:56.658621 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:59.216557 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:59.226542 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:59.226605 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:59.250485 2974151 cri.go:89] found id: ""
	I1217 10:49:59.250501 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.250522 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:59.250528 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:59.250597 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:59.275922 2974151 cri.go:89] found id: ""
	I1217 10:49:59.275936 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.275945 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:59.275960 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:59.276021 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:59.305346 2974151 cri.go:89] found id: ""
	I1217 10:49:59.305372 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.305380 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:59.305386 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:59.305454 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:59.329784 2974151 cri.go:89] found id: ""
	I1217 10:49:59.329799 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.329806 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:59.329812 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:59.329870 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:59.353939 2974151 cri.go:89] found id: ""
	I1217 10:49:59.353953 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.353961 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:59.353968 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:59.354030 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:59.379444 2974151 cri.go:89] found id: ""
	I1217 10:49:59.379458 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.379465 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:59.379471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:59.379535 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:59.404346 2974151 cri.go:89] found id: ""
	I1217 10:49:59.404360 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.404367 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:59.404374 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:59.404385 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:59.421191 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:59.421209 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:59.484153 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:59.476366   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.477052   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478594   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478902   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.480341   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:59.484164 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:59.484177 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:59.553474 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:59.553493 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:59.587183 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:59.587199 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:02.144181 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:02.155199 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:02.155292 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:02.188757 2974151 cri.go:89] found id: ""
	I1217 10:50:02.188773 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.188780 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:02.188785 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:02.188851 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:02.219315 2974151 cri.go:89] found id: ""
	I1217 10:50:02.219330 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.219337 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:02.219342 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:02.219406 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:02.244595 2974151 cri.go:89] found id: ""
	I1217 10:50:02.244609 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.244616 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:02.244622 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:02.244684 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:02.270632 2974151 cri.go:89] found id: ""
	I1217 10:50:02.270647 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.270654 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:02.270659 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:02.270718 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:02.296393 2974151 cri.go:89] found id: ""
	I1217 10:50:02.296407 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.296447 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:02.296454 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:02.296521 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:02.326837 2974151 cri.go:89] found id: ""
	I1217 10:50:02.326851 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.326859 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:02.326868 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:02.326931 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:02.356502 2974151 cri.go:89] found id: ""
	I1217 10:50:02.356517 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.356527 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:02.356536 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:02.356548 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:02.434224 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:02.417603   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.418251   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.426024   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428283   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428822   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:02.434234 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:02.434244 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:02.502034 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:02.502055 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:02.541286 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:02.541303 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:02.606116 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:02.606137 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:05.125496 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:05.136157 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:05.136217 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:05.160937 2974151 cri.go:89] found id: ""
	I1217 10:50:05.160952 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.160959 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:05.160964 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:05.161024 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:05.185873 2974151 cri.go:89] found id: ""
	I1217 10:50:05.185887 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.185894 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:05.185900 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:05.185999 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:05.212646 2974151 cri.go:89] found id: ""
	I1217 10:50:05.212676 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.212684 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:05.212690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:05.212767 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:05.238323 2974151 cri.go:89] found id: ""
	I1217 10:50:05.238340 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.238347 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:05.238353 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:05.238414 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:05.263764 2974151 cri.go:89] found id: ""
	I1217 10:50:05.263779 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.263786 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:05.263792 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:05.263849 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:05.289054 2974151 cri.go:89] found id: ""
	I1217 10:50:05.289069 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.289076 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:05.289081 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:05.289144 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:05.314515 2974151 cri.go:89] found id: ""
	I1217 10:50:05.314530 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.314538 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:05.314546 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:05.314556 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:05.380980 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:05.381002 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:05.414207 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:05.414222 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:05.472281 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:05.472301 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:05.489358 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:05.489375 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:05.571554 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:05.562906   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.563808   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.565527   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.566129   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.567151   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:08.071830 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:08.082387 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:08.082462 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:08.110539 2974151 cri.go:89] found id: ""
	I1217 10:50:08.110553 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.110561 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:08.110566 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:08.110629 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:08.135732 2974151 cri.go:89] found id: ""
	I1217 10:50:08.135746 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.135754 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:08.135760 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:08.135828 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:08.162274 2974151 cri.go:89] found id: ""
	I1217 10:50:08.162289 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.162296 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:08.162302 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:08.162359 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:08.187522 2974151 cri.go:89] found id: ""
	I1217 10:50:08.187536 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.187543 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:08.187549 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:08.187618 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:08.212868 2974151 cri.go:89] found id: ""
	I1217 10:50:08.212883 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.212890 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:08.212896 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:08.212958 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:08.236894 2974151 cri.go:89] found id: ""
	I1217 10:50:08.236908 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.236915 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:08.236921 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:08.236981 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:08.262293 2974151 cri.go:89] found id: ""
	I1217 10:50:08.262308 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.262315 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:08.262322 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:08.262332 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:08.320099 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:08.320118 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:08.337595 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:08.337611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:08.404535 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:08.395902   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.396655   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398294   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398971   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.400705   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:08.404545 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:08.404557 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:08.467318 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:08.467338 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
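Note: the block above is one iteration of minikube's control-plane wait loop: probe for an apiserver process with pgrep, list CRI containers for each expected component, and, when nothing is found, gather kubelet/dmesg/describe-nodes/containerd logs. A condensed sketch of the same probe, assuming crictl is on PATH inside the node (consistent with the log's own fallback `which crictl || echo crictl`):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done

The loop repeats every few seconds with identical results, which is why the iterations at 10:50:11, 10:50:14, 10:50:17 and 10:50:19 below differ only in timestamps and kubectl PIDs.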
	I1217 10:50:11.014160 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:11.025076 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:11.025146 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:11.050236 2974151 cri.go:89] found id: ""
	I1217 10:50:11.050252 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.050260 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:11.050265 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:11.050329 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:11.081289 2974151 cri.go:89] found id: ""
	I1217 10:50:11.081311 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.081318 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:11.081324 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:11.081385 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:11.111117 2974151 cri.go:89] found id: ""
	I1217 10:50:11.111134 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.111141 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:11.111146 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:11.111209 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:11.137886 2974151 cri.go:89] found id: ""
	I1217 10:50:11.137900 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.137908 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:11.137913 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:11.137972 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:11.164080 2974151 cri.go:89] found id: ""
	I1217 10:50:11.164096 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.164104 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:11.164119 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:11.164183 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:11.194241 2974151 cri.go:89] found id: ""
	I1217 10:50:11.194256 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.194264 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:11.194269 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:11.194331 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:11.220644 2974151 cri.go:89] found id: ""
	I1217 10:50:11.220659 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.220666 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:11.220673 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:11.220687 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:11.283052 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:11.283070 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:11.310700 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:11.310717 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:11.366749 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:11.366769 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:11.383957 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:11.383975 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:11.451001 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:11.442629   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.443048   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.444733   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.445416   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.447157   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:13.952741 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:13.962784 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:13.962846 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:13.987247 2974151 cri.go:89] found id: ""
	I1217 10:50:13.987262 2974151 logs.go:282] 0 containers: []
	W1217 10:50:13.987269 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:13.987274 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:13.987340 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:14.012962 2974151 cri.go:89] found id: ""
	I1217 10:50:14.012977 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.012984 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:14.012990 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:14.013058 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:14.038181 2974151 cri.go:89] found id: ""
	I1217 10:50:14.038195 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.038203 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:14.038208 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:14.038266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:14.062700 2974151 cri.go:89] found id: ""
	I1217 10:50:14.062715 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.062723 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:14.062728 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:14.062785 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:14.093364 2974151 cri.go:89] found id: ""
	I1217 10:50:14.093386 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.093393 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:14.093399 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:14.093457 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:14.118504 2974151 cri.go:89] found id: ""
	I1217 10:50:14.118519 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.118525 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:14.118531 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:14.118596 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:14.143182 2974151 cri.go:89] found id: ""
	I1217 10:50:14.143198 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.143204 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:14.143212 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:14.143223 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:14.201003 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:14.201024 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:14.218136 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:14.218153 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:14.291347 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:14.280379   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.281686   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285094   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285633   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.287421   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:14.291358 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:14.291370 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:14.354518 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:14.354541 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:16.888907 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:16.899327 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:16.899396 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:16.924553 2974151 cri.go:89] found id: ""
	I1217 10:50:16.924572 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.924580 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:16.924586 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:16.924646 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:16.950729 2974151 cri.go:89] found id: ""
	I1217 10:50:16.950743 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.950750 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:16.950756 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:16.950811 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:16.978167 2974151 cri.go:89] found id: ""
	I1217 10:50:16.978181 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.978189 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:16.978193 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:16.978254 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:17.005223 2974151 cri.go:89] found id: ""
	I1217 10:50:17.005239 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.005247 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:17.005253 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:17.005336 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:17.031301 2974151 cri.go:89] found id: ""
	I1217 10:50:17.031315 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.031323 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:17.031328 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:17.031393 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:17.058782 2974151 cri.go:89] found id: ""
	I1217 10:50:17.058796 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.058804 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:17.058810 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:17.058869 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:17.084580 2974151 cri.go:89] found id: ""
	I1217 10:50:17.084595 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.084603 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:17.084611 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:17.084628 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:17.144045 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:17.144067 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:17.161459 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:17.161476 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:17.230344 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:17.221052   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.221467   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.224663   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.225044   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.226301   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:17.230353 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:17.230364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:17.292978 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:17.292998 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:19.828581 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:19.838853 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:19.838914 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:19.864198 2974151 cri.go:89] found id: ""
	I1217 10:50:19.864213 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.864220 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:19.864225 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:19.864284 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:19.899721 2974151 cri.go:89] found id: ""
	I1217 10:50:19.899735 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.899758 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:19.899764 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:19.899837 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:19.928330 2974151 cri.go:89] found id: ""
	I1217 10:50:19.928345 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.928352 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:19.928356 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:19.928445 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:19.954497 2974151 cri.go:89] found id: ""
	I1217 10:50:19.954514 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.954538 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:19.954545 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:19.954608 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:19.980091 2974151 cri.go:89] found id: ""
	I1217 10:50:19.980105 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.980112 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:19.980118 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:19.980184 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:20.010659 2974151 cri.go:89] found id: ""
	I1217 10:50:20.010676 2974151 logs.go:282] 0 containers: []
	W1217 10:50:20.010685 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:20.010691 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:20.010767 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:20.043088 2974151 cri.go:89] found id: ""
	I1217 10:50:20.043104 2974151 logs.go:282] 0 containers: []
	W1217 10:50:20.043113 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:20.043121 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:20.043132 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:20.100529 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:20.100550 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:20.118575 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:20.118591 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:20.187144 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:20.178717   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.179517   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181042   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181412   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.182990   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:20.187155 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:20.187167 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:20.249393 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:20.249414 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:22.778795 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:22.790536 2974151 kubeadm.go:602] duration metric: took 4m2.042602584s to restartPrimaryControlPlane
	W1217 10:50:22.790601 2974151 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 10:50:22.790675 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 10:50:23.205315 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 10:50:23.219008 2974151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 10:50:23.227117 2974151 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:50:23.227176 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:50:23.235370 2974151 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:50:23.235380 2974151 kubeadm.go:158] found existing configuration files:
	
	I1217 10:50:23.235436 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:50:23.243539 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:50:23.243597 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:50:23.251153 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:50:23.259288 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:50:23.259364 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:50:23.267370 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:50:23.275727 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:50:23.275787 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:50:23.283930 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:50:23.292280 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:50:23.292340 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
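Note: the grep/rm pairs above implement a simple stale-config sweep before `kubeadm init`: each kubeconfig under /etc/kubernetes is kept only if it already targets the expected control-plane endpoint. A sketch of the same pattern (endpoint string taken from the log):

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # missing file or wrong endpoint -> grep exits non-zero -> remove the file
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done

Here all four files were already absent (status 2 from both ls and grep), so the rm calls are no-ops and kubeadm regenerates every kubeconfig from scratch.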
	I1217 10:50:23.300010 2974151 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:50:23.340550 2974151 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:50:23.340717 2974151 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:50:23.412202 2974151 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:50:23.412287 2974151 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:50:23.412322 2974151 kubeadm.go:319] OS: Linux
	I1217 10:50:23.412377 2974151 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:50:23.412441 2974151 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:50:23.412489 2974151 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:50:23.412536 2974151 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:50:23.412585 2974151 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:50:23.412632 2974151 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:50:23.412677 2974151 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:50:23.412724 2974151 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:50:23.412769 2974151 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:50:23.486890 2974151 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:50:23.486989 2974151 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:50:23.487074 2974151 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:50:23.492949 2974151 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:50:23.496478 2974151 out.go:252]   - Generating certificates and keys ...
	I1217 10:50:23.496568 2974151 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:50:23.496637 2974151 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:50:23.496718 2974151 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 10:50:23.496782 2974151 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 10:50:23.496856 2974151 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 10:50:23.496912 2974151 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 10:50:23.496979 2974151 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 10:50:23.497043 2974151 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 10:50:23.497122 2974151 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 10:50:23.497199 2974151 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 10:50:23.497239 2974151 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 10:50:23.497303 2974151 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:50:23.659882 2974151 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:50:23.806390 2974151 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:50:23.994170 2974151 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:50:24.254389 2974151 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:50:24.616203 2974151 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:50:24.616885 2974151 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:50:24.619452 2974151 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:50:24.622875 2974151 out.go:252]   - Booting up control plane ...
	I1217 10:50:24.622979 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:50:24.623060 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:50:24.623134 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:50:24.643299 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:50:24.643404 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:50:24.652837 2974151 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:50:24.652937 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:50:24.652975 2974151 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:50:24.787245 2974151 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:50:24.787354 2974151 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:54:24.787078 2974151 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000331472s
	I1217 10:54:24.787103 2974151 kubeadm.go:319] 
	I1217 10:54:24.787156 2974151 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:54:24.787187 2974151 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:54:24.787285 2974151 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:54:24.787290 2974151 kubeadm.go:319] 
	I1217 10:54:24.787387 2974151 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:54:24.787416 2974151 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:54:24.787445 2974151 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:54:24.787448 2974151 kubeadm.go:319] 
	I1217 10:54:24.791515 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:54:24.791934 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:54:24.792041 2974151 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:54:24.792274 2974151 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 10:54:24.792279 2974151 kubeadm.go:319] 
	I1217 10:54:24.792347 2974151 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
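Note: the init attempt failed at exactly one gate — kubeadm waited the full 4m0s for the kubelet's local health endpoint and never got an answer. The triage it suggests is the right next step; a sketch of those checks, run inside the node (commands are the ones kubeadm itself names above):

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    # the exact probe kubeadm gave up on:
    curl -sSL http://127.0.0.1:10248/healthz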
	W1217 10:54:24.792486 2974151 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000331472s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
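Note: of the three preflight warnings, the cgroups v1 one is the actionable item on this 5.15 AWS kernel: per the warning text, kubelet v1.35+ refuses cgroup v1 hosts unless FailCgroupV1 is explicitly set to false. A hypothetical remediation sketch — the camelCase field name failCgroupV1 is assumed from the KubeletConfiguration option the warning names, and the config path is taken from the log, not verified against this image:

    # append the override to the kubelet config kubeadm wrote, then restart
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet

The log does not prove this is the root cause — the warning is non-fatal at preflight — but it is the only node-configuration complaint raised before the kubelet health check times out, on both init attempts.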
	
	I1217 10:54:24.792573 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 10:54:25.209097 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 10:54:25.222902 2974151 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:54:25.222960 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:54:25.231173 2974151 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:54:25.231182 2974151 kubeadm.go:158] found existing configuration files:
	
	I1217 10:54:25.231234 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:54:25.239239 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:54:25.239293 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:54:25.246851 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:54:25.254681 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:54:25.254734 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:54:25.262252 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:54:25.270359 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:54:25.270417 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:54:25.277936 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:54:25.286063 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:54:25.286121 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 10:54:25.293834 2974151 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:54:25.333226 2974151 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:54:25.333620 2974151 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:54:25.403386 2974151 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:54:25.403450 2974151 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:54:25.403488 2974151 kubeadm.go:319] OS: Linux
	I1217 10:54:25.403533 2974151 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:54:25.403579 2974151 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:54:25.403625 2974151 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:54:25.403672 2974151 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:54:25.403719 2974151 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:54:25.403765 2974151 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:54:25.403809 2974151 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:54:25.403855 2974151 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:54:25.403900 2974151 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:54:25.478252 2974151 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:54:25.478355 2974151 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:54:25.478445 2974151 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:54:25.483628 2974151 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:54:25.487136 2974151 out.go:252]   - Generating certificates and keys ...
	I1217 10:54:25.487234 2974151 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:54:25.487310 2974151 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:54:25.487433 2974151 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 10:54:25.487529 2974151 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 10:54:25.487605 2974151 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 10:54:25.487662 2974151 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 10:54:25.487729 2974151 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 10:54:25.487795 2974151 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 10:54:25.487917 2974151 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 10:54:25.487994 2974151 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 10:54:25.488380 2974151 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 10:54:25.488481 2974151 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:54:26.117291 2974151 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:54:26.756756 2974151 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:54:27.066378 2974151 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:54:27.235545 2974151 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:54:27.468773 2974151 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:54:27.469453 2974151 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:54:27.472021 2974151 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:54:27.475042 2974151 out.go:252]   - Booting up control plane ...
	I1217 10:54:27.475141 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:54:27.475225 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:54:27.475306 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:54:27.497360 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:54:27.497461 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:54:27.505167 2974151 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:54:27.506337 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:54:27.506384 2974151 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:54:27.645391 2974151 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:54:27.645508 2974151 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:58:27.644872 2974151 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000353032s
	I1217 10:58:27.644897 2974151 kubeadm.go:319] 
	I1217 10:58:27.644952 2974151 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:58:27.644984 2974151 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:58:27.645087 2974151 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:58:27.645092 2974151 kubeadm.go:319] 
	I1217 10:58:27.645195 2974151 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:58:27.645226 2974151 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:58:27.645255 2974151 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:58:27.645258 2974151 kubeadm.go:319] 
	I1217 10:58:27.649050 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:58:27.649524 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:58:27.649634 2974151 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:58:27.649875 2974151 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 10:58:27.649881 2974151 kubeadm.go:319] 
	I1217 10:58:27.649949 2974151 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 10:58:27.650003 2974151 kubeadm.go:403] duration metric: took 12m6.936466746s to StartCluster
	I1217 10:58:27.650034 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:58:27.650094 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:58:27.678841 2974151 cri.go:89] found id: ""
	I1217 10:58:27.678855 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.678862 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:58:27.678868 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:58:27.678928 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:58:27.704494 2974151 cri.go:89] found id: ""
	I1217 10:58:27.704507 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.704514 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:58:27.704520 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:58:27.704578 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:58:27.729757 2974151 cri.go:89] found id: ""
	I1217 10:58:27.729770 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.729777 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:58:27.729783 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:58:27.729840 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:58:27.757253 2974151 cri.go:89] found id: ""
	I1217 10:58:27.757267 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.757274 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:58:27.757284 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:58:27.757343 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:58:27.781735 2974151 cri.go:89] found id: ""
	I1217 10:58:27.781749 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.781756 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:58:27.781760 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:58:27.781817 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:58:27.806628 2974151 cri.go:89] found id: ""
	I1217 10:58:27.806642 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.806649 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:58:27.806655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:58:27.806713 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:58:27.831983 2974151 cri.go:89] found id: ""
	I1217 10:58:27.831997 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.832004 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:58:27.832013 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:58:27.832023 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:58:27.889768 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:58:27.889788 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:58:27.906789 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:58:27.906806 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:58:27.971294 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:58:27.963241   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.963807   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965347   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965829   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.967335   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:58:27.963241   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.963807   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965347   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965829   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.967335   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:58:27.971304 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:58:27.971317 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:58:28.034286 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:58:28.034308 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 10:58:28.076352 2974151 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 10:58:28.076384 2974151 out.go:285] * 
	W1217 10:58:28.076460 2974151 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:58:28.076478 2974151 out.go:285] * 
	W1217 10:58:28.078620 2974151 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:58:28.084354 2974151 out.go:203] 
	W1217 10:58:28.086597 2974151 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:58:28.086645 2974151 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 10:58:28.086668 2974151 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 10:58:28.089656 2974151 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.366987997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367000042Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367054433Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367069325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367089255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367101152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367110883Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367125414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367141668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367171452Z" level=info msg="Connect containerd service"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367467445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.368062180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389242103Z" level=info msg="Start subscribing containerd event"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389467722Z" level=info msg="Start recovering state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389473490Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.390097098Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430326171Z" level=info msg="Start event monitor"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430520850Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430594670Z" level=info msg="Start streaming server"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430655559Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430712788Z" level=info msg="runtime interface starting up..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430945234Z" level=info msg="starting plugins..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430989147Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.431326009Z" level=info msg="containerd successfully booted in 0.084806s"
	Dec 17 10:46:19 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:58:31.525250   21128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:31.525964   21128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:31.527531   21128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:31.527962   21128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:31.529503   21128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:58:31 up 16:41,  0 user,  load average: 0.55, 0.27, 0.47
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 10:58:28 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:58:28 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 10:58:28 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:28 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:28 functional-232588 kubelet[20906]: E1217 10:58:28.827036   20906 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:58:28 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:58:28 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:58:29 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 10:58:29 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:29 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:29 functional-232588 kubelet[21003]: E1217 10:58:29.567884   21003 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:58:29 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:58:29 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:58:30 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 10:58:30 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:30 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:30 functional-232588 kubelet[21021]: E1217 10:58:30.249697   21021 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:58:30 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:58:30 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 10:58:30 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 17 10:58:30 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:31 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 10:58:31 functional-232588 kubelet[21044]: E1217 10:58:31.064170   21044 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 10:58:31 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 10:58:31 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (349.669053ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.12s)
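
The kubelet journal above pins the root cause for this whole serial group: kubelet v1.35.0-rc.1 exits at startup with "kubelet is configured to not run on a host using cgroup v1", so the healthz probe on 127.0.0.1:10248 can never pass and every apiserver-dependent check cascades into "connection refused". The preflight warning names the escape hatch: set the kubelet configuration option FailCgroupV1 to false (the SystemVerification preflight check is already skipped by the kubeadm invocation). A minimal sketch of checking the host and applying that override through kubeadm's patch mechanism, the same "kubeletconfiguration" patch target the log shows minikube using; the patch directory and wiring here are illustrative assumptions, not the fix minikube ships:

    # "tmpfs" here means the host is still on cgroup v1 (as on this runner);
    # "cgroup2fs" would mean cgroup v2.
    stat -fc %T /sys/fs/cgroup/

    # Hypothetical strategic-merge patch re-enabling cgroup v1 for kubelet >= 1.35,
    # per the [WARNING SystemVerification] text above (see KEP-5573).
    mkdir -p /tmp/kubeadm-patches
    printf '%s\n' \
      'apiVersion: kubelet.config.k8s.io/v1beta1' \
      'kind: KubeletConfiguration' \
      'failCgroupV1: false' > /tmp/kubeadm-patches/kubeletconfiguration.yaml
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /tmp/kubeadm-patches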

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-232588 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-232588 apply -f testdata/invalidsvc.yaml: exit status 1 (55.114603ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-232588 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)
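
This one is pure fallout from the dead apiserver rather than a problem with the invalid-service fixture: kubectl apply validates manifests against the server's OpenAPI schema by default, and with nothing listening on 192.168.49.2:8441 the schema download is the first call to fail. The stderr names the knob; a sketch using the context from this run (skipping validation only removes that round-trip, the create itself would still get connection refused here):

    # Skip client-side schema validation, as suggested by the error text above;
    # the apply still needs a live apiserver to actually create the object.
    kubectl --context functional-232588 apply --validate=false -f testdata/invalidsvc.yaml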

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-232588 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-232588 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-232588 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-232588 --alsologtostderr -v=1] stderr:
I1217 11:00:51.387597 2991539 out.go:360] Setting OutFile to fd 1 ...
I1217 11:00:51.387713 2991539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:00:51.387725 2991539 out.go:374] Setting ErrFile to fd 2...
I1217 11:00:51.387731 2991539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:00:51.388019 2991539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 11:00:51.388298 2991539 mustload.go:66] Loading cluster: functional-232588
I1217 11:00:51.388761 2991539 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:00:51.389236 2991539 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 11:00:51.406231 2991539 host.go:66] Checking if "functional-232588" exists ...
I1217 11:00:51.406543 2991539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 11:00:51.460304 2991539 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:51.450973196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 11:00:51.460458 2991539 api_server.go:166] Checking apiserver status ...
I1217 11:00:51.460522 2991539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 11:00:51.460566 2991539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 11:00:51.478584 2991539 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
W1217 11:00:51.582294 2991539 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1217 11:00:51.585750 2991539 out.go:179] * The control-plane node functional-232588 apiserver is not running: (state=Stopped)
I1217 11:00:51.588687 2991539 out.go:179]   To start a cluster, run: "minikube start -p functional-232588"
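
The stderr above also shows how minikube decides the apiserver is "Stopped": it SSHes into the node and runs pgrep against the process table, treating exit status 1 as "no kube-apiserver". A rough way to reproduce that probe by hand, assuming the docker driver and the container name from this run:

    # The same probe minikube ran over SSH (exit status 1 = process not found).
    docker exec functional-232588 sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "kube-apiserver is not running"
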
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
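
Note on the inspect output above: minikube's docker driver publishes each container port only on the loopback interface with an ephemeral host port, so the apiserver port 8441/tcp configured for this profile is reachable from the host at 127.0.0.1:35736. If you only need that one mapping while triaging, a plain docker CLI query (a generic diagnostic sketch, not part of the test harness) is shorter than reading the whole dump:

	# prints the host side of the 8441/tcp binding, e.g. "127.0.0.1:35736"
	docker port functional-232588 8441/tcp
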
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (309.57566ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-232588 service hello-node --url --format={{.IP}}                                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ service   │ functional-232588 service hello-node --url                                                                                                         │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001:/mount-9p --alsologtostderr -v=1              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh -- ls -la /mount-9p                                                                                                          │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh cat /mount-9p/test-1765969241578179780                                                                                       │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh sudo umount -f /mount-9p                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun828762534/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh -- ls -la /mount-9p                                                                                                          │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh sudo umount -f /mount-9p                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount1 --alsologtostderr -v=1               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount1                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount2 --alsologtostderr -v=1               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount3 --alsologtostderr -v=1               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount2                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh findmnt -T /mount3                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ mount     │ -p functional-232588 --kill=true                                                                                                                   │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ start     │ -p functional-232588 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1  │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ start     │ -p functional-232588 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1  │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ start     │ -p functional-232588 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1            │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-232588 --alsologtostderr -v=1                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:00:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:00:51.150282 2991469 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:00:51.150465 2991469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:51.150489 2991469 out.go:374] Setting ErrFile to fd 2...
	I1217 11:00:51.150518 2991469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:51.150824 2991469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:00:51.151249 2991469 out.go:368] Setting JSON to false
	I1217 11:00:51.152238 2991469 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":60202,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:00:51.152357 2991469 start.go:143] virtualization:  
	I1217 11:00:51.155844 2991469 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:00:51.158983 2991469 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:00:51.159071 2991469 notify.go:221] Checking for updates...
	I1217 11:00:51.165084 2991469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:00:51.167975 2991469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:00:51.170864 2991469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:00:51.173758 2991469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:00:51.176681 2991469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:00:51.180035 2991469 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:00:51.180790 2991469 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:00:51.212640 2991469 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:00:51.212760 2991469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:00:51.268928 2991469 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:51.260149135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:00:51.269027 2991469 docker.go:319] overlay module found
	I1217 11:00:51.272025 2991469 out.go:179] * Using the docker driver based on existing profile
	I1217 11:00:51.274918 2991469 start.go:309] selected driver: docker
	I1217 11:00:51.274941 2991469 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:00:51.275036 2991469 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:00:51.275164 2991469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:00:51.332206 2991469 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:51.323192574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:00:51.332741 2991469 cni.go:84] Creating CNI manager for ""
	I1217 11:00:51.332806 2991469 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:00:51.332850 2991469 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:00:51.335859 2991469 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.366987997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367000042Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367054433Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367069325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367089255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367101152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367110883Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367125414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367141668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367171452Z" level=info msg="Connect containerd service"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367467445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.368062180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389242103Z" level=info msg="Start subscribing containerd event"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389467722Z" level=info msg="Start recovering state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389473490Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.390097098Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430326171Z" level=info msg="Start event monitor"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430520850Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430594670Z" level=info msg="Start streaming server"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430655559Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430712788Z" level=info msg="runtime interface starting up..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430945234Z" level=info msg="starting plugins..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430989147Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.431326009Z" level=info msg="containerd successfully booted in 0.084806s"
	Dec 17 10:46:19 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:00:52.626703   23330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:52.627112   23330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:52.628838   23330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:52.629548   23330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:52.631159   23330 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:00:52 up 16:43,  0 user,  load average: 0.93, 0.37, 0.47
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:00:49 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:49 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 509.
	Dec 17 11:00:49 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:50 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:50 functional-232588 kubelet[23190]: E1217 11:00:50.090508   23190 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:50 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:50 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:50 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 510.
	Dec 17 11:00:50 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:50 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:50 functional-232588 kubelet[23210]: E1217 11:00:50.823481   23210 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:50 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:50 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:51 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 511.
	Dec 17 11:00:51 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:51 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:51 functional-232588 kubelet[23217]: E1217 11:00:51.556948   23217 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:51 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:51 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:52 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 512.
	Dec 17 11:00:52 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:52 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:52 functional-232588 kubelet[23252]: E1217 11:00:52.312063   23252 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:52 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:52 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (335.351786ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.71s)
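
Note on the failure mode: the kubelet journal above shows the node agent crash-looping (restart counters 509 through 512) with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up and every test that depends on it fails within seconds. The builder (Ubuntu 20.04, kernel 5.15.0-1084-aws, docker reporting CgroupDriver:cgroupfs) appears to still be on cgroup v1, while the v1.35.0-rc.1 kubelet refuses to start there, warning that cgroup v1 support is unsupported and slated for removal. A generic way to confirm which cgroup version a host boots with (a diagnostic sketch, not something the suite runs):

	# prints "cgroup2fs" on a cgroup v2 (unified) host and "tmpfs" on a cgroup v1 host
	stat -fc %T /sys/fs/cgroup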

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 status: exit status 2 (341.105948ms)

-- stdout --
	functional-232588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-232588 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (313.285608ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-232588 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 status -o json: exit status 2 (351.219827ms)

-- stdout --
	{"Name":"functional-232588","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-232588 status -o json" : exit status 2
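
For readers triaging this section: the three status invocations above differ only in output format. The -f flag takes a Go template over the status struct (the label "kublet" is literal text in the test's own template string; the evaluated field is {{.Kubelet}}), and -o json emits the same struct as JSON. All three exit 2, which minikube uses to signal degraded component state rather than a command error (the harness itself notes "may be ok" for the same code below). The JSON form is the easiest to post-process, for example with jq (an assumed tool here, not one the suite uses):

	# prints the APIServer field from the status JSON, "Stopped" for the run above
	out/minikube-linux-arm64 -p functional-232588 status -o json | jq -r .APIServer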
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (311.201851ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-232588 addons list -o json                                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ service │ functional-232588 service list                                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ service │ functional-232588 service list -o json                                                                                                             │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ service │ functional-232588 service --namespace=default --https --url hello-node                                                                             │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ service │ functional-232588 service hello-node --url --format={{.IP}}                                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ service │ functional-232588 service hello-node --url                                                                                                         │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ mount   │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001:/mount-9p --alsologtostderr -v=1              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh     │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh     │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh     │ functional-232588 ssh -- ls -la /mount-9p                                                                                                          │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh     │ functional-232588 ssh cat /mount-9p/test-1765969241578179780                                                                                       │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh     │ functional-232588 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh     │ functional-232588 ssh sudo umount -f /mount-9p                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ mount   │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun828762534/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh     │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh     │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh     │ functional-232588 ssh -- ls -la /mount-9p                                                                                                          │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh     │ functional-232588 ssh sudo umount -f /mount-9p                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ mount   │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount1 --alsologtostderr -v=1               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh     │ functional-232588 ssh findmnt -T /mount1                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ mount   │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount2 --alsologtostderr -v=1               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ mount   │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount3 --alsologtostderr -v=1               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh     │ functional-232588 ssh findmnt -T /mount2                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh     │ functional-232588 ssh findmnt -T /mount3                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ mount   │ -p functional-232588 --kill=true                                                                                                                   │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:46:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:46:16.812860 2974151 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:46:16.812963 2974151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:46:16.813007 2974151 out.go:374] Setting ErrFile to fd 2...
	I1217 10:46:16.813012 2974151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:46:16.813266 2974151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:46:16.813634 2974151 out.go:368] Setting JSON to false
	I1217 10:46:16.814461 2974151 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":59327,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:46:16.814519 2974151 start.go:143] virtualization:  
	I1217 10:46:16.818066 2974151 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:46:16.822068 2974151 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:46:16.822151 2974151 notify.go:221] Checking for updates...
	I1217 10:46:16.828253 2974151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:46:16.831316 2974151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:46:16.834373 2974151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:46:16.837375 2974151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:46:16.840310 2974151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:46:16.843753 2974151 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:46:16.843853 2974151 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:46:16.873076 2974151 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:46:16.873190 2974151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:46:16.938275 2974151 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 10:46:16.928760564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
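
[Editor's note] The `docker system info --format "{{json .}}"` run above is how minikube snapshots the host daemon before reusing a profile. A minimal sketch of the same probe in Go, assuming only a local `docker` binary on PATH; the struct below is a hand-picked subset of the reply, not minikube's full info type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds the handful of fields this sketch cares about;
// the real daemon reply carries many more (see the log line above).
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	Architecture    string `json:"Architecture"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	CgroupDriver    string `json:"CgroupDriver"`
}

func main() {
	// Same invocation as the cli_runner lines in the log.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s/%s, %d CPUs, %d bytes RAM, cgroup driver %s\n",
		info.ServerVersion, info.OperatingSystem, info.Architecture,
		info.NCPU, info.MemTotal, info.CgroupDriver)
}
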
	I1217 10:46:16.938365 2974151 docker.go:319] overlay module found
	I1217 10:46:16.941603 2974151 out.go:179] * Using the docker driver based on existing profile
	I1217 10:46:16.944540 2974151 start.go:309] selected driver: docker
	I1217 10:46:16.944578 2974151 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:16.944677 2974151 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:46:16.944788 2974151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:46:17.021027 2974151 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 10:46:17.010774366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:46:17.021436 2974151 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 10:46:17.021458 2974151 cni.go:84] Creating CNI manager for ""
	I1217 10:46:17.021510 2974151 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:46:17.021561 2974151 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:17.024793 2974151 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:46:17.027565 2974151 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:46:17.030993 2974151 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:46:17.033790 2974151 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:46:17.033824 2974151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:46:17.033833 2974151 cache.go:65] Caching tarball of preloaded images
	I1217 10:46:17.033918 2974151 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:46:17.033926 2974151 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 10:46:17.034031 2974151 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:46:17.034251 2974151 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:46:17.058099 2974151 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:46:17.058112 2974151 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 10:46:17.058125 2974151 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:46:17.058155 2974151 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:46:17.058219 2974151 start.go:364] duration metric: took 48.59µs to acquireMachinesLock for "functional-232588"
	I1217 10:46:17.058239 2974151 start.go:96] Skipping create...Using existing machine configuration
	I1217 10:46:17.058243 2974151 fix.go:54] fixHost starting: 
	I1217 10:46:17.058504 2974151 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:46:17.079212 2974151 fix.go:112] recreateIfNeeded on functional-232588: state=Running err=<nil>
	W1217 10:46:17.079241 2974151 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 10:46:17.082582 2974151 out.go:252] * Updating the running docker "functional-232588" container ...
	I1217 10:46:17.082612 2974151 machine.go:94] provisionDockerMachine start ...
	I1217 10:46:17.082696 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.100077 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.100208 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.100214 2974151 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:46:17.228063 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:46:17.228077 2974151 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:46:17.228138 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.245852 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.245963 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.245971 2974151 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:46:17.390208 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:46:17.390287 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.409213 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.409321 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.409335 2974151 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:46:17.545048 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: 
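
[Editor's note] The `libmachine: About to run SSH command` steps above (hostname, the /etc/hosts fix-up) are single commands run over the container's forwarded SSH port, 127.0.0.1:35733 in this run. A stripped-down sketch of that pattern using golang.org/x/crypto/ssh, assuming key-authenticated access as the `docker` user; the key path below is a placeholder:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH opens one session and runs a single command, the way the
// provisioner runs `hostname` and the /etc/hosts update above.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Fine for a sketch; verify host keys in real code.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:35733", "docker", "/path/to/id_ed25519", "hostname")
	fmt.Println(out, err)
}
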
	I1217 10:46:17.545065 2974151 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:46:17.545093 2974151 ubuntu.go:190] setting up certificates
	I1217 10:46:17.545101 2974151 provision.go:84] configureAuth start
	I1217 10:46:17.545170 2974151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:46:17.563036 2974151 provision.go:143] copyHostCerts
	I1217 10:46:17.563100 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:46:17.563107 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:46:17.563182 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:46:17.563277 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:46:17.563281 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:46:17.563306 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:46:17.563356 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:46:17.563359 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:46:17.563381 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:46:17.563426 2974151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
	I1217 10:46:17.716164 2974151 provision.go:177] copyRemoteCerts
	I1217 10:46:17.716219 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:46:17.716261 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.737388 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:17.836120 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:46:17.853626 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:46:17.870501 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 10:46:17.888326 2974151 provision.go:87] duration metric: took 343.201911ms to configureAuth
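
[Editor's note] configureAuth above regenerates a server certificate whose SANs cover every name the machine answers to (127.0.0.1, 192.168.49.2, functional-232588, localhost, minikube, per the provision.go:117 line). A self-contained sketch of issuing such a cert in Go; it is self-signed here for brevity, whereas the log signs with the minikube CA key (ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-232588"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go:117 line.
		DNSNames:    []string{"functional-232588", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed: the template doubles as its own issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
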
	I1217 10:46:17.888344 2974151 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:46:17.888621 2974151 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:46:17.888627 2974151 machine.go:97] duration metric: took 806.010876ms to provisionDockerMachine
	I1217 10:46:17.888635 2974151 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:46:17.888646 2974151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:46:17.888710 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:46:17.888750 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.905996 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.000491 2974151 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:46:18.012109 2974151 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:46:18.012146 2974151 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:46:18.012158 2974151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:46:18.012224 2974151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:46:18.012302 2974151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:46:18.012378 2974151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:46:18.012531 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:46:18.021349 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:46:18.041286 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:46:18.060319 2974151 start.go:296] duration metric: took 171.669118ms for postStartSetup
	I1217 10:46:18.060436 2974151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:46:18.060478 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.080470 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.173527 2974151 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:46:18.178353 2974151 fix.go:56] duration metric: took 1.120102504s for fixHost
	I1217 10:46:18.178370 2974151 start.go:83] releasing machines lock for "functional-232588", held for 1.120143316s
	I1217 10:46:18.178439 2974151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:46:18.195096 2974151 ssh_runner.go:195] Run: cat /version.json
	I1217 10:46:18.195136 2974151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:46:18.195139 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.195194 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.218089 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.224561 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.312237 2974151 ssh_runner.go:195] Run: systemctl --version
	I1217 10:46:18.401982 2974151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 10:46:18.406442 2974151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:46:18.406503 2974151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:46:18.414452 2974151 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 10:46:18.414475 2974151 start.go:496] detecting cgroup driver to use...
	I1217 10:46:18.414504 2974151 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 10:46:18.414555 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:46:18.437080 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:46:18.453263 2974151 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:46:18.453314 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:46:18.469891 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:46:18.484540 2974151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:46:18.608866 2974151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:46:18.727258 2974151 docker.go:234] disabling docker service ...
	I1217 10:46:18.727333 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:46:18.742532 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:46:18.755933 2974151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:46:18.876736 2974151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:46:18.997189 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 10:46:19.012062 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:46:19.033558 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:46:19.046193 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:46:19.056269 2974151 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:46:19.056333 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:46:19.066650 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:46:19.076242 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:46:19.086026 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:46:19.095009 2974151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:46:19.103467 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:46:19.112970 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:46:19.121805 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 10:46:19.131086 2974151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:46:19.139081 2974151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:46:19.146487 2974151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:46:19.293215 2974151 ssh_runner.go:195] Run: sudo systemctl restart containerd
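
[Editor's note] The run of sed commands above (sandbox_image, SystemdCgroup, conf_dir, enable_unprivileged_ports) is plain in-place editing of /etc/containerd/config.toml, followed by daemon-reload and a containerd restart. A sketch of the SystemdCgroup edit done the same line-oriented way in Go; the file path is the real one from the log, the rest is illustrative:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// (the host uses the cgroupfs driver, so containerd must not delegate to systemd).
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
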
	I1217 10:46:19.434655 2974151 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:46:19.434715 2974151 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:46:19.439246 2974151 start.go:564] Will wait 60s for crictl version
	I1217 10:46:19.439314 2974151 ssh_runner.go:195] Run: which crictl
	I1217 10:46:19.442915 2974151 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:46:19.467445 2974151 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:46:19.467506 2974151 ssh_runner.go:195] Run: containerd --version
	I1217 10:46:19.489544 2974151 ssh_runner.go:195] Run: containerd --version
	I1217 10:46:19.516185 2974151 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 10:46:19.519114 2974151 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:46:19.535732 2974151 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:46:19.542843 2974151 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 10:46:19.545647 2974151 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:46:19.545821 2974151 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:46:19.545902 2974151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:46:19.570156 2974151 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:46:19.570167 2974151 containerd.go:534] Images already preloaded, skipping extraction
	I1217 10:46:19.570223 2974151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:46:19.598013 2974151 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:46:19.598025 2974151 cache_images.go:86] Images are preloaded, skipping loading
	I1217 10:46:19.598031 2974151 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:46:19.598133 2974151 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 10:46:19.598195 2974151 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:46:19.628150 2974151 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 10:46:19.628169 2974151 cni.go:84] Creating CNI manager for ""
	I1217 10:46:19.628176 2974151 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:46:19.628184 2974151 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:46:19.628205 2974151 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:46:19.628313 2974151 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
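
[Editor's note] The kubeadm config above corresponds field-for-field to the options struct printed at kubeadm.go:190: each ComponentOptions entry becomes a name/value pair under the matching extraArgs block. A toy text/template sketch of how such a rendering step could look, assuming a simplified options shape (the struct and template here are illustrative, not minikube's):

package main

import (
	"os"
	"text/template"
)

type component struct {
	Name      string
	ExtraArgs map[string]string
}

// Renders just the apiServer/controllerManager/scheduler extraArgs
// portion of a v1beta4 ClusterConfiguration.
const tmpl = `{{range .}}{{.Name}}:
  extraArgs:
{{- range $k, $v := .ExtraArgs}}
    - name: "{{$k}}"
      value: "{{$v}}"
{{- end}}
{{end}}`

func main() {
	comps := []component{
		{"apiServer", map[string]string{"enable-admission-plugins": "NamespaceAutoProvision"}},
		{"controllerManager", map[string]string{"allocate-node-cidrs": "true", "leader-elect": "false"}},
		{"scheduler", map[string]string{"leader-elect": "false"}},
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, comps); err != nil {
		panic(err)
	}
}
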
	
	I1217 10:46:19.628380 2974151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:46:19.636242 2974151 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 10:46:19.636301 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:46:19.643919 2974151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:46:19.658022 2974151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:46:19.670961 2974151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1217 10:46:19.684065 2974151 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:46:19.687947 2974151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:46:19.796384 2974151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:46:20.002745 2974151 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:46:20.002759 2974151 certs.go:195] generating shared ca certs ...
	I1217 10:46:20.002799 2974151 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:46:20.002998 2974151 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:46:20.003055 2974151 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:46:20.003062 2974151 certs.go:257] generating profile certs ...
	I1217 10:46:20.003183 2974151 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:46:20.003236 2974151 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:46:20.003288 2974151 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:46:20.003444 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:46:20.003480 2974151 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:46:20.003508 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:46:20.003545 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:46:20.003577 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:46:20.003610 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:46:20.003665 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:46:20.004449 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:46:20.040127 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:46:20.065442 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:46:20.086611 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:46:20.107054 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:46:20.126007 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:46:20.144078 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:46:20.162802 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:46:20.181368 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:46:20.200073 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:46:20.217945 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:46:20.235640 2974151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 10:46:20.248545 2974151 ssh_runner.go:195] Run: openssl version
	I1217 10:46:20.256076 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.263759 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:46:20.271126 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.274974 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.275038 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.316429 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:46:20.323945 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.331201 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:46:20.339536 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.343551 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.343606 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.384485 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 10:46:20.391694 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.399044 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:46:20.406332 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.410078 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.410134 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.451203 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
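
[Editor's note] The test/ln/ls/openssl sequence above installs each CA into the system trust store: copy the PEM into /usr/share/ca-certificates, symlink it into /etc/ssl/certs, then verify a hash-named link (51391683.0, 3ec20f2e.0, b5213941.0) exists so OpenSSL can find the cert by subject-hash lookup. A sketch of that loop in Go, shelling out for the hash since it is an OpenSSL-specific digest; the paths are the ones in the log:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trust(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject hash used for
	// the c_rehash-style lookup links in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // equivalent of ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/2924574.pem",
		"/usr/share/ca-certificates/29245742.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := trust(p); err != nil {
			panic(err)
		}
	}
}
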
	I1217 10:46:20.458641 2974151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:46:20.462247 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 10:46:20.503114 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 10:46:20.544335 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 10:46:20.590045 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 10:46:20.630985 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 10:46:20.672580 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
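
[Editor's note] The six `openssl x509 -checkend 86400` runs above confirm that no control-plane certificate expires within the next 24 hours before the existing certs are reused. The same check expressed in Go, parsing the PEM and comparing NotAfter; the path is one of the certs from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d (openssl's -checkend takes the same window in seconds).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
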
	I1217 10:46:20.713547 2974151 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:20.713638 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:46:20.713707 2974151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:46:20.740007 2974151 cri.go:89] found id: ""
	I1217 10:46:20.740065 2974151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:46:20.747914 2974151 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 10:46:20.747924 2974151 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 10:46:20.747974 2974151 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 10:46:20.757908 2974151 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.758430 2974151 kubeconfig.go:125] found "functional-232588" server: "https://192.168.49.2:8441"
	I1217 10:46:20.761036 2974151 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 10:46:20.769414 2974151 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 10:31:46.081162571 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 10:46:19.676908670 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
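
[Editor's note] The drift check above works by exit status: `diff -u` returns 0 when the rendered kubeadm.yaml.new matches the deployed kubeadm.yaml, 1 when they differ (as here, where the admission-plugins value changed), and greater than 1 on error. A sketch of reading that tri-state result in Go; the file paths are the ones from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` and maps the exit status:
// 0 = identical, 1 = drift detected, anything else = real failure.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: no drift
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit >1 or exec failure
}

func main() {
	drift, patch, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drift {
		fmt.Println("kubeadm config drift detected:\n" + patch)
	}
}
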
	I1217 10:46:20.769441 2974151 kubeadm.go:1161] stopping kube-system containers ...
	I1217 10:46:20.769455 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 10:46:20.769528 2974151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:46:20.801226 2974151 cri.go:89] found id: ""
	I1217 10:46:20.801308 2974151 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 10:46:20.820664 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:46:20.829373 2974151 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 10:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 10:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 17 10:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 10:35 /etc/kubernetes/scheduler.conf
	
	I1217 10:46:20.829433 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:46:20.837325 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:46:20.845308 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.845363 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:46:20.853199 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:46:20.860841 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.860897 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:46:20.868346 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:46:20.876151 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.876211 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
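Each grep/rm pair above applies the same rule: if a kubeconfig no longer references https://control-plane.minikube.internal:8441 it is deleted so the kubeadm phases below can regenerate it. In this run admin.conf passed the grep and was kept, while kubelet.conf, controller-manager.conf and scheduler.conf were removed. A compact sketch of that loop, with the endpoint and file list taken from the log (error handling simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8441"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent; that is the
    		// "may not be in ... - will remove" case logged above.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s does not mention %s; removing\n", f, endpoint)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }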
	I1217 10:46:20.883945 2974151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 10:46:20.892018 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:20.938748 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.162130 2974151 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.22335875s)
	I1217 10:46:22.162221 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.359829 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.415930 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
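Rather than a full `kubeadm init`, the restart path regenerates state phase by phase in a fixed order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence, assuming the versioned binary path from the log and calling kubeadm directly (the real runner wraps it in `env PATH=... kubeadm ...` so companion binaries resolve; that detail is omitted here):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const binDir = "/var/lib/minikube/binaries/v1.35.0-rc.1"
    	const cfg = "/var/tmp/minikube/kubeadm.yaml"
    	// Phase order matches the kubeadm invocations in the log above.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{binDir + "/kubeadm", "init", "phase"}, p...)
    		args = append(args, "--config", cfg)
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    	fmt.Println("all kubeadm init phases completed")
    }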
	I1217 10:46:22.468185 2974151 api_server.go:52] waiting for apiserver process to appear ...
	I1217 10:46:22.468265 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:22.969146 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:23.468479 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:23.968514 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:24.468479 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:24.969355 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:25.469200 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:25.969018 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:26.468818 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:26.969109 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:27.468378 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:27.969311 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:28.469065 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:28.969101 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:29.468403 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:29.968443 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:30.468499 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:30.968729 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:31.468355 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:31.968496 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:32.468560 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:32.968509 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:33.469088 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:33.969160 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:34.468498 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:34.968497 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:35.468823 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:35.968410 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:36.469195 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:36.969040 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:37.469267 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:37.969122 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:38.469239 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:38.969263 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:39.469144 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:39.969429 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:40.468520 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:40.968559 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:41.469268 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:41.968407 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:42.469044 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:42.969148 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:43.468399 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:43.968478 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:44.468402 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:44.969211 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:45.469415 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:45.968355 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:46.468347 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:46.969243 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:47.468650 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:47.969320 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:48.469355 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:48.969346 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:49.469299 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:49.968561 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:50.469414 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:50.968570 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:51.468468 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:51.969383 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:52.468402 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:52.969191 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:53.469310 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:53.969186 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:54.469057 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:54.968491 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:55.469204 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:55.968499 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:56.468579 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:56.968537 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:57.468523 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:57.968481 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:58.468521 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:58.969320 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:59.469211 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:59.968498 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:00.468441 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:00.969123 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:01.468956 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:01.969376 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:02.468446 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:02.969237 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:03.468449 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:03.969079 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:04.469054 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:04.968610 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:05.468502 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:05.968334 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:06.469020 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:06.969077 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:07.469052 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:07.968481 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:08.469171 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:08.968586 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:09.469235 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:09.968478 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:10.469198 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:10.968403 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:11.469192 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:11.969439 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:12.469344 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:12.969231 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:13.469196 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:13.969169 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:14.469322 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:14.969138 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:15.469310 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:15.969247 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:16.469080 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:16.968869 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:17.468522 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:17.968551 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:18.468369 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:18.969356 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:19.469354 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:19.969205 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:20.469085 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:20.968997 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:21.468670 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:21.969358 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
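From 10:46:22 to 10:47:21 the runner polls for the apiserver process roughly every 500ms and never finds a match, which is what sends the test into the log-gathering loop below. A sketch of that wait, assuming a plain sleep-and-deadline loop rather than minikube's api_server.go machinery:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` until the
    // process appears or the deadline passes, mirroring the ~500ms cadence above.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // pgrep exits 0 once a matching process exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }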
	I1217 10:47:22.469259 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:22.469337 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:22.493873 2974151 cri.go:89] found id: ""
	I1217 10:47:22.493887 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.493894 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:22.493901 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:22.493960 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:22.522462 2974151 cri.go:89] found id: ""
	I1217 10:47:22.522476 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.522483 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:22.522488 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:22.522547 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:22.550878 2974151 cri.go:89] found id: ""
	I1217 10:47:22.550892 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.550899 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:22.550904 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:22.550964 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:22.576167 2974151 cri.go:89] found id: ""
	I1217 10:47:22.576181 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.576188 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:22.576193 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:22.576253 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:22.600591 2974151 cri.go:89] found id: ""
	I1217 10:47:22.600605 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.600612 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:22.600617 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:22.600673 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:22.624978 2974151 cri.go:89] found id: ""
	I1217 10:47:22.624992 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.624999 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:22.625005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:22.625062 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:22.649387 2974151 cri.go:89] found id: ""
	I1217 10:47:22.649401 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.649408 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:22.649415 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:22.649427 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:22.666544 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:22.666563 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:22.733635 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:22.724930   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.725595   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727257   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727857   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.729508   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:22.724930   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.725595   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727257   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727857   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.729508   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
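The `describe nodes` failure is a symptom, not the cause: kubectl is pointed at https://localhost:8441 on the node via /var/lib/minikube/kubeconfig, and "connection refused" means nothing is bound to that port, consistent with pgrep never finding an apiserver. A quick way to confirm it is the socket rather than kubectl at fault is a raw TCP dial on the node (a diagnostic sketch, not part of the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// A refused dial means no process is listening on 8441, matching the
    	// "connect: connection refused" lines in the kubectl stderr above.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on 8441")
    }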
	I1217 10:47:22.733647 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:22.733658 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:22.802118 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:22.802139 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:22.842645 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:22.842661 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
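Each gathering pass collects the same four sources: kubelet and containerd units from journald, dmesg, and a container listing that falls back from crictl to docker. That fallback is the backtick expression in the command above; a sketch of it, assuming /bin/bash exists on the node:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus mirrors the log's fallback: prefer crictl, and if it is
    // missing or its runtime is down, try docker instead.
    func containerStatus() (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    	}
    	fmt.Print(out)
    }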
	I1217 10:47:25.403296 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:25.413370 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:25.413431 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:25.437778 2974151 cri.go:89] found id: ""
	I1217 10:47:25.437792 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.437799 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:25.437804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:25.437864 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:25.466932 2974151 cri.go:89] found id: ""
	I1217 10:47:25.466946 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.466953 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:25.466959 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:25.467017 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:25.495887 2974151 cri.go:89] found id: ""
	I1217 10:47:25.495901 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.495907 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:25.495912 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:25.495971 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:25.521061 2974151 cri.go:89] found id: ""
	I1217 10:47:25.521075 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.521082 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:25.521087 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:25.521146 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:25.550884 2974151 cri.go:89] found id: ""
	I1217 10:47:25.550898 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.550905 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:25.550910 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:25.550967 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:25.576130 2974151 cri.go:89] found id: ""
	I1217 10:47:25.576145 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.576151 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:25.576156 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:25.576224 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:25.600903 2974151 cri.go:89] found id: ""
	I1217 10:47:25.600916 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.600923 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:25.600931 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:25.600941 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:25.633359 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:25.633375 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:25.689492 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:25.689512 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:25.706643 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:25.706661 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:25.788195 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:25.780730   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.781147   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782587   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782886   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.784365   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:25.780730   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.781147   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782587   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782886   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.784365   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:25.788207 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:25.788218 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:28.357987 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:28.368310 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:28.368371 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:28.393766 2974151 cri.go:89] found id: ""
	I1217 10:47:28.393789 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.393797 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:28.393803 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:28.393876 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:28.418225 2974151 cri.go:89] found id: ""
	I1217 10:47:28.418240 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.418247 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:28.418253 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:28.418312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:28.444064 2974151 cri.go:89] found id: ""
	I1217 10:47:28.444083 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.444091 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:28.444096 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:28.444157 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:28.469125 2974151 cri.go:89] found id: ""
	I1217 10:47:28.469139 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.469146 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:28.469152 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:28.469210 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:28.494598 2974151 cri.go:89] found id: ""
	I1217 10:47:28.494614 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.494621 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:28.494627 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:28.494689 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:28.529767 2974151 cri.go:89] found id: ""
	I1217 10:47:28.529781 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.529788 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:28.529793 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:28.529851 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:28.554626 2974151 cri.go:89] found id: ""
	I1217 10:47:28.554640 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.554653 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:28.554661 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:28.554671 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:28.610665 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:28.610693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:28.627829 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:28.627846 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:28.694227 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:28.685909   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.686688   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688310   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688904   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.690427   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:28.685909   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.686688   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688310   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688904   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.690427   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:28.694247 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:28.694257 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:28.761980 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:28.761999 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:31.299127 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:31.309358 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:31.309418 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:31.334436 2974151 cri.go:89] found id: ""
	I1217 10:47:31.334450 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.334458 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:31.334463 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:31.334530 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:31.359180 2974151 cri.go:89] found id: ""
	I1217 10:47:31.359195 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.359202 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:31.359207 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:31.359264 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:31.386298 2974151 cri.go:89] found id: ""
	I1217 10:47:31.386312 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.386319 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:31.386324 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:31.386385 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:31.414747 2974151 cri.go:89] found id: ""
	I1217 10:47:31.414762 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.414769 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:31.414774 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:31.414835 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:31.439979 2974151 cri.go:89] found id: ""
	I1217 10:47:31.439993 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.439999 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:31.440005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:31.440061 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:31.465613 2974151 cri.go:89] found id: ""
	I1217 10:47:31.465628 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.465635 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:31.465641 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:31.465698 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:31.495303 2974151 cri.go:89] found id: ""
	I1217 10:47:31.495317 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.495324 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:31.495332 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:31.495347 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:31.551359 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:31.551380 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:31.568339 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:31.568356 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:31.631156 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:31.622217   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.623260   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.624240   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.625368   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.626068   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:31.622217   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.623260   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.624240   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.625368   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.626068   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:31.631168 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:31.631179 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:31.694344 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:31.694364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:34.224306 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:34.234549 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:34.234609 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:34.262893 2974151 cri.go:89] found id: ""
	I1217 10:47:34.262907 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.262913 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:34.262919 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:34.262974 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:34.287865 2974151 cri.go:89] found id: ""
	I1217 10:47:34.287880 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.287887 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:34.287892 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:34.287971 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:34.314130 2974151 cri.go:89] found id: ""
	I1217 10:47:34.314144 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.314151 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:34.314157 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:34.314213 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:34.338080 2974151 cri.go:89] found id: ""
	I1217 10:47:34.338094 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.338101 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:34.338106 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:34.338167 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:34.366907 2974151 cri.go:89] found id: ""
	I1217 10:47:34.366922 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.366929 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:34.366934 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:34.367005 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:34.394628 2974151 cri.go:89] found id: ""
	I1217 10:47:34.394642 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.394650 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:34.394655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:34.394718 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:34.422575 2974151 cri.go:89] found id: ""
	I1217 10:47:34.422590 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.422597 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:34.422605 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:34.422615 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:34.478427 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:34.478445 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:34.495399 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:34.495416 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:34.567591 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:34.559443   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.560218   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.561959   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.562370   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.563927   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:34.559443   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.560218   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.561959   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.562370   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.563927   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:34.567600 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:34.567611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:34.629987 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:34.630008 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:37.172568 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:37.185167 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:37.185227 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:37.209648 2974151 cri.go:89] found id: ""
	I1217 10:47:37.209662 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.209669 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:37.209674 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:37.209734 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:37.239202 2974151 cri.go:89] found id: ""
	I1217 10:47:37.239216 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.239223 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:37.239229 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:37.239287 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:37.264777 2974151 cri.go:89] found id: ""
	I1217 10:47:37.264791 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.264798 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:37.264803 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:37.264870 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:37.290195 2974151 cri.go:89] found id: ""
	I1217 10:47:37.290209 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.290216 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:37.290221 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:37.290277 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:37.315019 2974151 cri.go:89] found id: ""
	I1217 10:47:37.315033 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.315040 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:37.315046 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:37.315116 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:37.339319 2974151 cri.go:89] found id: ""
	I1217 10:47:37.339333 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.339340 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:37.339345 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:37.339407 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:37.365996 2974151 cri.go:89] found id: ""
	I1217 10:47:37.366010 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.366017 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:37.366024 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:37.366034 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:37.382805 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:37.382824 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:37.447944 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:37.439827   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.440553   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442220   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442682   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.444195   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:37.439827   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.440553   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442220   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442682   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.444195   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:37.447955 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:37.447966 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:37.510276 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:37.510298 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:37.540200 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:37.540215 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:40.105556 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:40.119775 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:40.119860 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:40.144817 2974151 cri.go:89] found id: ""
	I1217 10:47:40.144832 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.144839 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:40.144844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:40.144908 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:40.169663 2974151 cri.go:89] found id: ""
	I1217 10:47:40.169676 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.169683 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:40.169688 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:40.169745 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:40.194821 2974151 cri.go:89] found id: ""
	I1217 10:47:40.194835 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.194842 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:40.194847 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:40.194909 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:40.222839 2974151 cri.go:89] found id: ""
	I1217 10:47:40.222853 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.222860 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:40.222866 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:40.222940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:40.247991 2974151 cri.go:89] found id: ""
	I1217 10:47:40.248005 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.248012 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:40.248017 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:40.248075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:40.272758 2974151 cri.go:89] found id: ""
	I1217 10:47:40.272772 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.272778 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:40.272783 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:40.272844 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:40.298276 2974151 cri.go:89] found id: ""
	I1217 10:47:40.298290 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.298297 2974151 logs.go:284] No container was found matching "kindnet"
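
Before each log dump, the loop above asks crictl for every expected control-plane container and finds none. A self-contained Go sketch of that scan, using the component names and crictl flags visible in the log; exec.Command here is a local stand-in for minikube's ssh_runner, which runs the same commands over SSH inside the node:

// Scan for expected control-plane containers, as in the log above.
// Requires crictl on the machine it runs on.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			// matches the log's: No container was found matching "<name>"
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
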
	I1217 10:47:40.298305 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:40.298316 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:40.314934 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:40.314950 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:40.379519 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:40.371790   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.372215   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.373688   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.374125   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.375622   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:40.371790   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.372215   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.373688   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.374125   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.375622   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:40.379532 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:40.379544 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:40.442308 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:40.442328 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:40.471269 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:40.471287 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:43.030145 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:43.043645 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:43.043715 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:43.081236 2974151 cri.go:89] found id: ""
	I1217 10:47:43.081250 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.081257 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:43.081262 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:43.081326 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:43.115370 2974151 cri.go:89] found id: ""
	I1217 10:47:43.115384 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.115390 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:43.115399 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:43.115462 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:43.140373 2974151 cri.go:89] found id: ""
	I1217 10:47:43.140387 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.140395 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:43.140400 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:43.140480 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:43.166855 2974151 cri.go:89] found id: ""
	I1217 10:47:43.166870 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.166877 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:43.166883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:43.166941 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:43.191839 2974151 cri.go:89] found id: ""
	I1217 10:47:43.191854 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.191861 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:43.191866 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:43.191927 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:43.217632 2974151 cri.go:89] found id: ""
	I1217 10:47:43.217652 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.217659 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:43.217664 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:43.217725 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:43.242042 2974151 cri.go:89] found id: ""
	I1217 10:47:43.242056 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.242064 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:43.242071 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:43.242081 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:43.299602 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:43.299621 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:43.316995 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:43.317012 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:43.381195 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:43.373241   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.374026   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375639   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375964   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.377408   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:43.373241   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.374026   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375639   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375964   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.377408   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:43.381206 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:43.381217 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:43.443981 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:43.444003 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:45.975295 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:45.985580 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:45.985639 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:46.020412 2974151 cri.go:89] found id: ""
	I1217 10:47:46.020446 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.020454 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:46.020460 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:46.020529 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:46.056724 2974151 cri.go:89] found id: ""
	I1217 10:47:46.056739 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.056755 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:46.056762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:46.056823 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:46.087796 2974151 cri.go:89] found id: ""
	I1217 10:47:46.087811 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.087818 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:46.087844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:46.087924 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:46.112453 2974151 cri.go:89] found id: ""
	I1217 10:47:46.112467 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.112475 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:46.112480 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:46.112539 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:46.141019 2974151 cri.go:89] found id: ""
	I1217 10:47:46.141034 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.141041 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:46.141047 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:46.141103 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:46.165608 2974151 cri.go:89] found id: ""
	I1217 10:47:46.165621 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.165628 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:46.165634 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:46.165691 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:46.192283 2974151 cri.go:89] found id: ""
	I1217 10:47:46.192307 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.192315 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:46.192323 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:46.192335 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:46.255412 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:46.255435 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:46.287390 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:46.287406 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:46.344424 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:46.344442 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:46.361344 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:46.361361 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:46.424398 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:46.416182   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.416923   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418495   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418798   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.420304   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:46.416182   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.416923   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418495   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418798   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.420304   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
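
The timestamps show the whole scan-and-gather cycle restarting roughly every three seconds, each time beginning with the same pgrep probe for a kube-apiserver process. A hedged sketch of such a poll loop follows; the pgrep arguments are copied from the log, while the deadline is illustrative and not minikube's actual timeout:

// Poll for a running kube-apiserver process until a deadline, mirroring
// the ~3s cadence visible in the log timestamps.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // illustrative value only
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no process matches.
		err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
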
	I1217 10:47:48.924647 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:48.934813 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:48.934877 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:48.959135 2974151 cri.go:89] found id: ""
	I1217 10:47:48.959159 2974151 logs.go:282] 0 containers: []
	W1217 10:47:48.959166 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:48.959172 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:48.959241 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:48.983610 2974151 cri.go:89] found id: ""
	I1217 10:47:48.983632 2974151 logs.go:282] 0 containers: []
	W1217 10:47:48.983640 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:48.983645 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:48.983714 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:49.026685 2974151 cri.go:89] found id: ""
	I1217 10:47:49.026700 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.026707 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:49.026713 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:49.026773 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:49.060861 2974151 cri.go:89] found id: ""
	I1217 10:47:49.060876 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.060883 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:49.060890 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:49.060950 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:49.090198 2974151 cri.go:89] found id: ""
	I1217 10:47:49.090213 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.090221 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:49.090226 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:49.090288 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:49.119661 2974151 cri.go:89] found id: ""
	I1217 10:47:49.119676 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.119683 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:49.119689 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:49.119812 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:49.148486 2974151 cri.go:89] found id: ""
	I1217 10:47:49.148500 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.148507 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:49.148515 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:49.148525 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:49.212250 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:49.212271 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:49.240975 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:49.240993 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:49.299733 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:49.299756 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:49.316863 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:49.316882 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:49.387132 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:49.378625   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.379410   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381103   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381692   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.383302   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:49.378625   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.379410   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381103   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381692   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.383302   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:51.888132 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:51.898751 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:51.898816 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:51.932795 2974151 cri.go:89] found id: ""
	I1217 10:47:51.932815 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.932827 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:51.932833 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:51.932896 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:51.963357 2974151 cri.go:89] found id: ""
	I1217 10:47:51.963371 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.963378 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:51.963384 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:51.963448 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:51.988757 2974151 cri.go:89] found id: ""
	I1217 10:47:51.988778 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.988785 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:51.988790 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:51.988850 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:52.028153 2974151 cri.go:89] found id: ""
	I1217 10:47:52.028167 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.028174 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:52.028180 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:52.028244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:52.063954 2974151 cri.go:89] found id: ""
	I1217 10:47:52.063968 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.063975 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:52.063980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:52.064038 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:52.098500 2974151 cri.go:89] found id: ""
	I1217 10:47:52.098514 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.098521 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:52.098527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:52.098587 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:52.130345 2974151 cri.go:89] found id: ""
	I1217 10:47:52.130359 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.130366 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:52.130374 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:52.130384 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:52.189106 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:52.189126 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:52.207475 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:52.207493 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:52.271884 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:52.263636   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.264410   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.265990   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.266498   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.267978   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:52.263636   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.264410   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.265990   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.266498   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.267978   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:52.271903 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:52.271914 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:52.334484 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:52.334504 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:54.867624 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:54.877729 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:54.877789 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:54.902223 2974151 cri.go:89] found id: ""
	I1217 10:47:54.902237 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.902244 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:54.902250 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:54.902312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:54.927795 2974151 cri.go:89] found id: ""
	I1217 10:47:54.927810 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.927817 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:54.927823 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:54.927888 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:54.954800 2974151 cri.go:89] found id: ""
	I1217 10:47:54.954816 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.954823 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:54.954829 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:54.954888 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:54.980005 2974151 cri.go:89] found id: ""
	I1217 10:47:54.980018 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.980025 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:54.980030 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:54.980093 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:55.013092 2974151 cri.go:89] found id: ""
	I1217 10:47:55.013107 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.013115 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:55.013121 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:55.013191 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:55.050531 2974151 cri.go:89] found id: ""
	I1217 10:47:55.050545 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.050552 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:55.050557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:55.050619 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:55.090230 2974151 cri.go:89] found id: ""
	I1217 10:47:55.090245 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.090252 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:55.090260 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:55.090270 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:55.153444 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:55.153464 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:55.185504 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:55.185520 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:55.242466 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:55.242485 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:55.260631 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:55.260648 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:55.331030 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:55.322930   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.323475   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.324828   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.325446   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.327095   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:55.322930   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.323475   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.324828   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.325446   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.327095   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:57.831262 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:57.841170 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:57.841234 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:57.869513 2974151 cri.go:89] found id: ""
	I1217 10:47:57.869529 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.869536 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:57.869542 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:57.869602 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:57.898410 2974151 cri.go:89] found id: ""
	I1217 10:47:57.898424 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.898431 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:57.898437 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:57.898497 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:57.926916 2974151 cri.go:89] found id: ""
	I1217 10:47:57.926931 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.926938 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:57.926944 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:57.927008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:57.956754 2974151 cri.go:89] found id: ""
	I1217 10:47:57.956768 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.956775 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:57.956780 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:57.956840 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:57.981614 2974151 cri.go:89] found id: ""
	I1217 10:47:57.981629 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.981636 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:57.981642 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:57.981701 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:58.021825 2974151 cri.go:89] found id: ""
	I1217 10:47:58.021839 2974151 logs.go:282] 0 containers: []
	W1217 10:47:58.021846 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:58.021852 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:58.021924 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:58.055082 2974151 cri.go:89] found id: ""
	I1217 10:47:58.055097 2974151 logs.go:282] 0 containers: []
	W1217 10:47:58.055104 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:58.055111 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:58.055120 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:58.117865 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:58.117887 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:58.136280 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:58.136297 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:58.204520 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:58.195962   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.196685   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.198489   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.199076   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.200656   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:58.195962   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.196685   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.198489   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.199076   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.200656   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
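
Each cycle gathers the same four log sources: the kubelet and containerd units via journalctl, filtered kernel messages via dmesg, and container status via crictl with a docker fallback. A sketch that runs the identical one-liners locally, for illustration only (minikube executes them inside the node over SSH via ssh_runner):

// Run the per-cycle log-gathering commands copied verbatim from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		`sudo journalctl -u containerd -n 400`,
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", c, err)
		}
		fmt.Printf("--- %s ---\n%s\n", c, out)
	}
}
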
	I1217 10:47:58.204540 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:58.204551 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:58.267689 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:58.267713 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:00.795803 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:00.807186 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:00.807252 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:00.833048 2974151 cri.go:89] found id: ""
	I1217 10:48:00.833062 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.833069 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:00.833074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:00.833136 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:00.863311 2974151 cri.go:89] found id: ""
	I1217 10:48:00.863325 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.863332 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:00.863338 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:00.863398 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:00.887857 2974151 cri.go:89] found id: ""
	I1217 10:48:00.887871 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.887877 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:00.887883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:00.887940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:00.913735 2974151 cri.go:89] found id: ""
	I1217 10:48:00.913749 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.913756 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:00.913762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:00.913824 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:00.938305 2974151 cri.go:89] found id: ""
	I1217 10:48:00.938319 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.938327 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:00.938333 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:00.938390 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:00.963900 2974151 cri.go:89] found id: ""
	I1217 10:48:00.963914 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.963920 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:00.963925 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:00.963985 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:00.990708 2974151 cri.go:89] found id: ""
	I1217 10:48:00.990722 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.990729 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:00.990737 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:00.990747 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:01.012006 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:01.012023 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:01.099675 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:01.089770   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.090990   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.092688   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.093302   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.095197   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:01.089770   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.090990   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.092688   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.093302   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.095197   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:01.099686 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:01.099702 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:01.164360 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:01.164381 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:01.194518 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:01.194535 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:03.752593 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:03.763233 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:03.763297 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:03.788873 2974151 cri.go:89] found id: ""
	I1217 10:48:03.788893 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.788901 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:03.788907 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:03.788968 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:03.818571 2974151 cri.go:89] found id: ""
	I1217 10:48:03.818586 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.818593 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:03.818598 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:03.818657 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:03.844383 2974151 cri.go:89] found id: ""
	I1217 10:48:03.844397 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.844405 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:03.844410 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:03.844496 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:03.869318 2974151 cri.go:89] found id: ""
	I1217 10:48:03.869333 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.869339 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:03.869345 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:03.869404 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:03.895029 2974151 cri.go:89] found id: ""
	I1217 10:48:03.895043 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.895050 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:03.895055 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:03.895113 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:03.920493 2974151 cri.go:89] found id: ""
	I1217 10:48:03.920509 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.920516 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:03.920522 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:03.920592 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:03.945885 2974151 cri.go:89] found id: ""
	I1217 10:48:03.945898 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.945905 2974151 logs.go:284] No container was found matching "kindnet"
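	Note: the seven lookups above are one pattern applied per control-plane component: ask the CRI runtime for any container, running or exited, whose name matches. All come back empty, meaning containerd never created the pods at all. The same probe as a shell sketch (component names taken verbatim from the log):

	    # list container IDs per component; an empty result reproduces the
	    # 'No container was found matching ...' warnings above
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "no container matching $c"
	    done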
	I1217 10:48:03.945912 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:03.945922 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:04.003008 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:04.003033 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:04.026399 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:04.026416 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:04.107334 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:04.098549   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.099321   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101190   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101779   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.103317   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
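	Note: the "Gathering logs" steps are plain shell commands and can be replayed by hand on the node. A sketch of the same collection (--no-pager is an addition for interactive use; the flags otherwise mirror the log):

	    sudo journalctl -u kubelet -n 400 --no-pager       # kubelet: why static pods are not starting
	    sudo journalctl -u containerd -n 400 --no-pager    # container runtime
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel messages

	The dmesg flags disable the pager and color output (-P, -L=never), print human-readable timestamps (-H), and keep only records at level warn or worse.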
	I1217 10:48:04.107349 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:04.107360 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:04.174915 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:04.174940 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:06.707611 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:06.718250 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:06.718313 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:06.743084 2974151 cri.go:89] found id: ""
	I1217 10:48:06.743098 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.743105 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:06.743110 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:06.743169 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:06.769923 2974151 cri.go:89] found id: ""
	I1217 10:48:06.769937 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.769945 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:06.769950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:06.770016 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:06.798634 2974151 cri.go:89] found id: ""
	I1217 10:48:06.798648 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.798655 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:06.798660 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:06.798719 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:06.823901 2974151 cri.go:89] found id: ""
	I1217 10:48:06.823915 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.823923 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:06.823928 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:06.823990 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:06.849872 2974151 cri.go:89] found id: ""
	I1217 10:48:06.849885 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.849892 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:06.849898 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:06.849957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:06.875558 2974151 cri.go:89] found id: ""
	I1217 10:48:06.875572 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.875580 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:06.875585 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:06.875642 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:06.901051 2974151 cri.go:89] found id: ""
	I1217 10:48:06.901065 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.901071 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:06.901079 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:06.901088 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:06.964468 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:06.964488 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:06.993527 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:06.993542 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:07.062199 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:07.062218 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:07.082316 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:07.082334 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:07.157387 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:07.148299   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.149171   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.150892   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.151645   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.153293   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
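	Note: each iteration opens with the pgrep call below and only falls through to the per-component crictl probes when no apiserver process is found. A commented sketch:

	    # -f: match against the full command line, -x: pattern must match it exactly,
	    # -n: print only the newest matching PID
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # exit status 1 with no output is the "not running yet" case seen throughout this log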
	I1217 10:48:09.657640 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:09.667724 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:09.667783 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:09.693919 2974151 cri.go:89] found id: ""
	I1217 10:48:09.693935 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.693941 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:09.693948 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:09.694008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:09.722743 2974151 cri.go:89] found id: ""
	I1217 10:48:09.722758 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.722765 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:09.722770 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:09.722828 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:09.756610 2974151 cri.go:89] found id: ""
	I1217 10:48:09.756624 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.756632 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:09.756637 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:09.756693 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:09.786006 2974151 cri.go:89] found id: ""
	I1217 10:48:09.786021 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.786028 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:09.786033 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:09.786097 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:09.810865 2974151 cri.go:89] found id: ""
	I1217 10:48:09.810878 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.810885 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:09.810890 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:09.810947 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:09.838221 2974151 cri.go:89] found id: ""
	I1217 10:48:09.838235 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.838242 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:09.838247 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:09.838307 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:09.866748 2974151 cri.go:89] found id: ""
	I1217 10:48:09.866762 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.866769 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:09.866776 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:09.866786 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:09.929554 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:09.929576 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:09.959017 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:09.959032 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:10.017246 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:10.017265 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:10.036170 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:10.036188 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:10.112138 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:10.102458   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.103256   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105102   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105527   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.107946   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
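	Note: "describe nodes" runs the kubectl binary minikube installed on the node for the requested Kubernetes version, against the node-local kubeconfig; that kubeconfig points at localhost:8441, so it fails for the same reason as every other call here. The command from the log, plus a hypothetical sanity check on the server address (the grep is illustrative, not from the log):

	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo grep 'server:' /var/lib/minikube/kubeconfig   # expect https://localhost:8441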
	I1217 10:48:12.612434 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:12.622568 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:12.622628 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:12.650041 2974151 cri.go:89] found id: ""
	I1217 10:48:12.650061 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.650069 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:12.650074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:12.650134 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:12.674422 2974151 cri.go:89] found id: ""
	I1217 10:48:12.674437 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.674444 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:12.674450 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:12.674509 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:12.703294 2974151 cri.go:89] found id: ""
	I1217 10:48:12.703308 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.703315 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:12.703320 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:12.703378 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:12.727986 2974151 cri.go:89] found id: ""
	I1217 10:48:12.728006 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.728013 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:12.728019 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:12.728078 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:12.753787 2974151 cri.go:89] found id: ""
	I1217 10:48:12.753800 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.753807 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:12.753812 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:12.753869 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:12.779807 2974151 cri.go:89] found id: ""
	I1217 10:48:12.779831 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.779838 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:12.779844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:12.779904 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:12.806196 2974151 cri.go:89] found id: ""
	I1217 10:48:12.806211 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.806219 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:12.806227 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:12.806237 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:12.862792 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:12.862812 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:12.879906 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:12.879923 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:12.944306 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:12.935386   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.935978   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.937685   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.938348   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.940016   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:12.944316 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:12.944327 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:13.006787 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:13.006812 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
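	Note: the "container status" step uses a fallback chain: run crictl if it is on PATH, otherwise try docker. The backquoted form from the log, rewritten with command -v as a sketch:

	    # prefer crictl; when it is absent the substitution still yields the bare word
	    # 'crictl', that invocation fails, and || falls through to docker
	    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a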
	I1217 10:48:15.546753 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:15.557080 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:15.557147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:15.582296 2974151 cri.go:89] found id: ""
	I1217 10:48:15.582309 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.582316 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:15.582321 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:15.582378 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:15.609992 2974151 cri.go:89] found id: ""
	I1217 10:48:15.610006 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.610013 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:15.610018 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:15.610075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:15.635702 2974151 cri.go:89] found id: ""
	I1217 10:48:15.635716 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.635723 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:15.635728 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:15.635788 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:15.661568 2974151 cri.go:89] found id: ""
	I1217 10:48:15.661582 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.661589 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:15.661595 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:15.661652 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:15.691028 2974151 cri.go:89] found id: ""
	I1217 10:48:15.691042 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.691049 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:15.691056 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:15.691114 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:15.715986 2974151 cri.go:89] found id: ""
	I1217 10:48:15.716009 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.716018 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:15.716023 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:15.716088 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:15.742377 2974151 cri.go:89] found id: ""
	I1217 10:48:15.742391 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.742398 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:15.742406 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:15.742417 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:15.759230 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:15.759248 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:15.824478 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:15.816058   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.816539   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818127   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818799   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.820350   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:15.824490 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:15.824502 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:15.892784 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:15.892804 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:15.921547 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:15.921562 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:18.478009 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:18.488179 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:18.488242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:18.511813 2974151 cri.go:89] found id: ""
	I1217 10:48:18.511827 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.511843 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:18.511850 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:18.511929 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:18.535876 2974151 cri.go:89] found id: ""
	I1217 10:48:18.535890 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.535897 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:18.535902 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:18.535957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:18.560498 2974151 cri.go:89] found id: ""
	I1217 10:48:18.560512 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.560521 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:18.560526 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:18.560588 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:18.585005 2974151 cri.go:89] found id: ""
	I1217 10:48:18.585018 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.585025 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:18.585030 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:18.585087 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:18.609132 2974151 cri.go:89] found id: ""
	I1217 10:48:18.609146 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.609153 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:18.609158 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:18.609215 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:18.640158 2974151 cri.go:89] found id: ""
	I1217 10:48:18.640172 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.640187 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:18.640194 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:18.640266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:18.669845 2974151 cri.go:89] found id: ""
	I1217 10:48:18.669860 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.669867 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:18.669874 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:18.669884 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:18.726133 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:18.726154 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:18.743323 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:18.743341 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:18.807202 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:18.798544   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.799121   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.800756   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.801823   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.803369   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:18.807212 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:18.807222 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:18.869437 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:18.869456 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:21.398466 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:21.408899 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:21.408973 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:21.433836 2974151 cri.go:89] found id: ""
	I1217 10:48:21.433851 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.433858 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:21.433863 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:21.433925 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:21.458440 2974151 cri.go:89] found id: ""
	I1217 10:48:21.458455 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.458462 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:21.458473 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:21.458531 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:21.482664 2974151 cri.go:89] found id: ""
	I1217 10:48:21.482678 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.482685 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:21.482690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:21.482747 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:21.510499 2974151 cri.go:89] found id: ""
	I1217 10:48:21.510513 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.510520 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:21.510525 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:21.510583 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:21.541182 2974151 cri.go:89] found id: ""
	I1217 10:48:21.541196 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.541204 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:21.541210 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:21.541268 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:21.565692 2974151 cri.go:89] found id: ""
	I1217 10:48:21.565705 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.565717 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:21.565723 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:21.565781 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:21.589704 2974151 cri.go:89] found id: ""
	I1217 10:48:21.589718 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.589725 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:21.589733 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:21.589743 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:21.651127 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:21.642467   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.643175   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.644846   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.645439   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.647213   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:21.651137 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:21.651153 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:21.714087 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:21.714110 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:21.743190 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:21.743205 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:21.803426 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:21.803446 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
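	Note: the timestamps show one full diagnostic pass roughly every three seconds (10:48:01, :03, :06, ... :21, :24, :27), i.e. minikube keeps polling for the apiserver, apparently until a start timeout expires. A minimal sketch of the same wait, assuming a 3-second period:

	    # poll until the apiserver process appears (an outer timeout or Ctrl-C ends it)
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done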
	I1217 10:48:24.321453 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:24.331883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:24.331948 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:24.356312 2974151 cri.go:89] found id: ""
	I1217 10:48:24.356327 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.356334 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:24.356340 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:24.356398 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:24.382382 2974151 cri.go:89] found id: ""
	I1217 10:48:24.382395 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.382402 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:24.382407 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:24.382466 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:24.410304 2974151 cri.go:89] found id: ""
	I1217 10:48:24.410318 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.410325 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:24.410330 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:24.410387 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:24.434459 2974151 cri.go:89] found id: ""
	I1217 10:48:24.434474 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.434481 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:24.434486 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:24.434551 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:24.459866 2974151 cri.go:89] found id: ""
	I1217 10:48:24.459881 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.459888 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:24.459893 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:24.459989 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:24.486458 2974151 cri.go:89] found id: ""
	I1217 10:48:24.486471 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.486478 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:24.486484 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:24.486548 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:24.511349 2974151 cri.go:89] found id: ""
	I1217 10:48:24.511363 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.511372 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:24.511379 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:24.511390 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:24.575296 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:24.566670   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.567433   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569106   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569660   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.571348   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:24.575314 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:24.575325 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:24.637043 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:24.637063 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:24.665459 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:24.665475 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:24.722699 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:24.722722 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:27.240739 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:27.252359 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:27.252432 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:27.280163 2974151 cri.go:89] found id: ""
	I1217 10:48:27.280177 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.280196 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:27.280201 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:27.280266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:27.309589 2974151 cri.go:89] found id: ""
	I1217 10:48:27.309603 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.309622 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:27.309627 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:27.309692 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:27.337538 2974151 cri.go:89] found id: ""
	I1217 10:48:27.337552 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.337559 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:27.337564 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:27.337622 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:27.361942 2974151 cri.go:89] found id: ""
	I1217 10:48:27.361957 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.361965 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:27.361970 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:27.362029 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:27.390818 2974151 cri.go:89] found id: ""
	I1217 10:48:27.390832 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.390840 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:27.390845 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:27.390908 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:27.422856 2974151 cri.go:89] found id: ""
	I1217 10:48:27.422871 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.422878 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:27.422883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:27.422943 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:27.448978 2974151 cri.go:89] found id: ""
	I1217 10:48:27.448992 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.448999 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:27.449007 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:27.449016 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:27.504505 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:27.504523 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:27.521306 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:27.521327 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:27.585173 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:27.576673   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.577398   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579125   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579750   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.581292   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:27.585182 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:27.585193 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:27.646817 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:27.646836 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
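
Each `cri.go` pair above runs `crictl ps -a --quiet --name=<component>` and treats empty output as "found id: \"\" / 0 containers". A standalone sketch of that enumeration step (assumes crictl on PATH and sudo rights; the real run executes over SSH inside the minikube node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the logged step: --quiet prints one container ID
// per line, so empty output means no matching container exists.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		fmt.Printf("%s: %d containers\n", c, len(listContainers(c)))
	}
}
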
	I1217 10:48:30.175129 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:30.186313 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:30.186377 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:30.213450 2974151 cri.go:89] found id: ""
	I1217 10:48:30.213464 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.213471 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:30.213476 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:30.213541 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:30.240025 2974151 cri.go:89] found id: ""
	I1217 10:48:30.240039 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.240046 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:30.240051 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:30.240126 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:30.286752 2974151 cri.go:89] found id: ""
	I1217 10:48:30.286766 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.286774 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:30.286779 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:30.286858 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:30.317210 2974151 cri.go:89] found id: ""
	I1217 10:48:30.317232 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.317240 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:30.317245 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:30.317305 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:30.345461 2974151 cri.go:89] found id: ""
	I1217 10:48:30.345475 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.345482 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:30.345487 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:30.345546 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:30.375558 2974151 cri.go:89] found id: ""
	I1217 10:48:30.375576 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.375590 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:30.375595 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:30.375655 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:30.401652 2974151 cri.go:89] found id: ""
	I1217 10:48:30.401668 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.401675 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:30.401683 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:30.401693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:30.462370 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:30.462393 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:30.480350 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:30.480366 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:30.545595 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:30.536885   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.537607   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539274   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539750   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.541271   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:30.545607 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:30.545619 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:30.609333 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:30.609353 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
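
The "container status" command above is a shell fallback chain: resolve crictl's path if installed (`which crictl || echo crictl`), and if the crictl listing fails entirely, fall back to `docker ps -a`. A small Go sketch that runs the same line (the command string is copied verbatim from the log; the wrapper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}
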
	I1217 10:48:33.138648 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:33.149215 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:33.149282 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:33.174735 2974151 cri.go:89] found id: ""
	I1217 10:48:33.174755 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.174764 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:33.174769 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:33.174832 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:33.200480 2974151 cri.go:89] found id: ""
	I1217 10:48:33.200495 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.200502 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:33.200507 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:33.200567 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:33.230102 2974151 cri.go:89] found id: ""
	I1217 10:48:33.230117 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.230124 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:33.230129 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:33.230186 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:33.273250 2974151 cri.go:89] found id: ""
	I1217 10:48:33.273264 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.273271 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:33.273278 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:33.273336 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:33.304262 2974151 cri.go:89] found id: ""
	I1217 10:48:33.304276 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.304293 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:33.304299 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:33.304359 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:33.332160 2974151 cri.go:89] found id: ""
	I1217 10:48:33.332174 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.332181 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:33.332186 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:33.332247 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:33.357270 2974151 cri.go:89] found id: ""
	I1217 10:48:33.357284 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.357291 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:33.357299 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:33.357308 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:33.420730 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:33.420751 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:33.448992 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:33.449007 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:33.504960 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:33.504979 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:33.521896 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:33.521913 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:33.584222 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:33.575275   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.576061   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.577717   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.578259   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.580086   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:36.084525 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:36.095613 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:36.095678 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:36.121922 2974151 cri.go:89] found id: ""
	I1217 10:48:36.121936 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.121944 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:36.121950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:36.122009 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:36.150594 2974151 cri.go:89] found id: ""
	I1217 10:48:36.150608 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.150616 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:36.150621 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:36.150682 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:36.179197 2974151 cri.go:89] found id: ""
	I1217 10:48:36.179210 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.179218 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:36.179223 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:36.179283 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:36.203527 2974151 cri.go:89] found id: ""
	I1217 10:48:36.203541 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.203548 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:36.203553 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:36.203620 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:36.228332 2974151 cri.go:89] found id: ""
	I1217 10:48:36.228345 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.228352 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:36.228358 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:36.228456 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:36.262749 2974151 cri.go:89] found id: ""
	I1217 10:48:36.262763 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.262769 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:36.262774 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:36.262834 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:36.300340 2974151 cri.go:89] found id: ""
	I1217 10:48:36.300353 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.300363 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:36.300371 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:36.300380 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:36.358709 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:36.358729 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:36.375631 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:36.375649 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:36.440551 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:36.432145   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.432737   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.434406   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.434949   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.436697   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:36.440560 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:36.440571 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:36.502941 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:36.502960 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
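
The journald gathering steps in each cycle fetch the last 400 lines per unit (`journalctl -u <unit> -n 400`). A sketch of the same collection for the two units minikube inspects here — illustrative; the real run goes through ssh_runner inside the node:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", unit, err)
			continue
		}
		fmt.Printf("--- %s: %d bytes of logs ---\n", unit, len(out))
	}
}
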
	I1217 10:48:39.031727 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:39.042285 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:39.042350 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:39.068264 2974151 cri.go:89] found id: ""
	I1217 10:48:39.068278 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.068285 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:39.068291 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:39.068352 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:39.091732 2974151 cri.go:89] found id: ""
	I1217 10:48:39.091745 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.091752 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:39.091757 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:39.091815 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:39.118106 2974151 cri.go:89] found id: ""
	I1217 10:48:39.118119 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.118126 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:39.118133 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:39.118189 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:39.146834 2974151 cri.go:89] found id: ""
	I1217 10:48:39.146848 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.146856 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:39.146861 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:39.146919 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:39.175980 2974151 cri.go:89] found id: ""
	I1217 10:48:39.175994 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.176001 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:39.176006 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:39.176069 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:39.201501 2974151 cri.go:89] found id: ""
	I1217 10:48:39.201515 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.201522 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:39.201527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:39.201582 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:39.226802 2974151 cri.go:89] found id: ""
	I1217 10:48:39.226816 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.226833 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:39.226841 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:39.226852 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:39.283913 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:39.283931 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:39.304511 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:39.304528 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:39.377031 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:39.368579   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.369282   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.370783   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.371295   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.372809   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:39.377044 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:39.377059 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:39.440871 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:39.440891 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:41.970682 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:41.981109 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:41.981168 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:42.014790 2974151 cri.go:89] found id: ""
	I1217 10:48:42.014806 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.014813 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:42.014820 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:42.014890 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:42.044163 2974151 cri.go:89] found id: ""
	I1217 10:48:42.044177 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.044183 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:42.044188 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:42.044247 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:42.074548 2974151 cri.go:89] found id: ""
	I1217 10:48:42.074581 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.074595 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:42.074605 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:42.074707 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:42.108730 2974151 cri.go:89] found id: ""
	I1217 10:48:42.108755 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.108763 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:42.108769 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:42.108838 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:42.140974 2974151 cri.go:89] found id: ""
	I1217 10:48:42.140989 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.140997 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:42.141002 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:42.141075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:42.185841 2974151 cri.go:89] found id: ""
	I1217 10:48:42.185857 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.185865 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:42.185871 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:42.185940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:42.227621 2974151 cri.go:89] found id: ""
	I1217 10:48:42.227637 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.227645 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:42.227654 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:42.227664 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:42.293458 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:42.293479 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:42.316925 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:42.316945 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:42.388580 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:42.379787   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.380216   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.381959   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.382335   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.383960   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:42.388600 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:42.388612 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:42.451727 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:42.451749 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
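
Every poll cycle opens with `pgrep -xnf kube-apiserver.*minikube.*`: with -f pgrep matches against the full command line, -x requires an exact (whole-line) match, and -n keeps only the newest matching process. A sketch of that liveness check in isolation (pattern copied from the log; the interpretation of the exit status is standard pgrep behavior):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 1 when nothing matches; that is consistent with every
	// crictl listing in the log finding 0 kube-apiserver containers.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		fmt.Println("no kube-apiserver process:", err)
		return
	}
	fmt.Println("kube-apiserver process found")
}
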
	I1217 10:48:44.984590 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:44.995270 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:44.995356 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:45.061848 2974151 cri.go:89] found id: ""
	I1217 10:48:45.061864 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.061871 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:45.061878 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:45.061944 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:45.120144 2974151 cri.go:89] found id: ""
	I1217 10:48:45.120160 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.120168 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:45.120174 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:45.120245 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:45.160210 2974151 cri.go:89] found id: ""
	I1217 10:48:45.160226 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.160235 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:45.160240 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:45.160314 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:45.221796 2974151 cri.go:89] found id: ""
	I1217 10:48:45.221829 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.221858 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:45.221880 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:45.222024 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:45.304676 2974151 cri.go:89] found id: ""
	I1217 10:48:45.304703 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.304711 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:45.304717 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:45.304788 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:45.337767 2974151 cri.go:89] found id: ""
	I1217 10:48:45.337790 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.337798 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:45.337804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:45.337871 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:45.373372 2974151 cri.go:89] found id: ""
	I1217 10:48:45.373387 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.373394 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:45.373402 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:45.373412 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:45.433269 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:45.433288 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:45.450287 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:45.450304 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:45.517643 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:45.508647   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.509270   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511035   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511639   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.513218   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:45.517653 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:45.517665 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:45.581750 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:45.581771 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
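
The "describe nodes" step shells out to the version-matched kubectl binary under /var/lib/minikube/binaries, pointed at the node-local kubeconfig; while no apiserver is up it exits with status 1, exactly as the warnings above record. A sketch of the same invocation (both paths copied from the log; the Go wrapper is an assumption, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err) // "Process exited with status 1" in the log
	}
	fmt.Print(string(out))
}
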
	I1217 10:48:48.117070 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:48.128197 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:48.128258 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:48.153434 2974151 cri.go:89] found id: ""
	I1217 10:48:48.153449 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.153455 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:48.153461 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:48.153520 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:48.178677 2974151 cri.go:89] found id: ""
	I1217 10:48:48.178691 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.178698 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:48.178703 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:48.178766 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:48.206864 2974151 cri.go:89] found id: ""
	I1217 10:48:48.206879 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.206886 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:48.206891 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:48.206957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:48.231924 2974151 cri.go:89] found id: ""
	I1217 10:48:48.231938 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.231945 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:48.231950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:48.232008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:48.274705 2974151 cri.go:89] found id: ""
	I1217 10:48:48.274718 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.274726 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:48.274731 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:48.274790 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:48.303863 2974151 cri.go:89] found id: ""
	I1217 10:48:48.303877 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.303884 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:48.303889 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:48.303950 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:48.328838 2974151 cri.go:89] found id: ""
	I1217 10:48:48.328852 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.328859 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:48.328867 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:48.328878 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:48.389442 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:48.389462 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:48.406684 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:48.406700 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:48.472922 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:48.463986   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.464483   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466011   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466510   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.468051   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:48.472932 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:48.472943 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:48.535655 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:48.535674 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:51.069071 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:51.081466 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:51.081531 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:51.111124 2974151 cri.go:89] found id: ""
	I1217 10:48:51.111139 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.111146 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:51.111152 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:51.111218 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:51.143791 2974151 cri.go:89] found id: ""
	I1217 10:48:51.143806 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.143813 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:51.143818 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:51.143881 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:51.169640 2974151 cri.go:89] found id: ""
	I1217 10:48:51.169655 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.169661 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:51.169666 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:51.169726 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:51.195027 2974151 cri.go:89] found id: ""
	I1217 10:48:51.195041 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.195048 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:51.195053 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:51.195115 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:51.219317 2974151 cri.go:89] found id: ""
	I1217 10:48:51.219330 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.219337 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:51.219342 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:51.219401 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:51.246522 2974151 cri.go:89] found id: ""
	I1217 10:48:51.246536 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.246543 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:51.246548 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:51.246606 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:51.277021 2974151 cri.go:89] found id: ""
	I1217 10:48:51.277047 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.277055 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:51.277064 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:51.277074 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:51.345341 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:51.345364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:51.378677 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:51.378693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:51.438850 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:51.438869 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:51.455900 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:51.455916 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:51.516892 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:51.508779   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.509483   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.510624   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.511147   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.512798   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
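	The per-component scan above repeats verbatim for every control-plane piece. Condensed into a loop for readability (the crictl invocation is copied from the Run: lines; the loop structure itself is an editorial sketch, not code minikube runs):

    # Same check as the log's seven crictl calls, looped.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done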
	I1217 10:48:54.017193 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:54.028476 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:54.028544 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:54.056997 2974151 cri.go:89] found id: ""
	I1217 10:48:54.057012 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.057019 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:54.057025 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:54.057086 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:54.083159 2974151 cri.go:89] found id: ""
	I1217 10:48:54.083175 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.083183 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:54.083189 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:54.083251 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:54.109519 2974151 cri.go:89] found id: ""
	I1217 10:48:54.109534 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.109549 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:54.109557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:54.109624 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:54.134157 2974151 cri.go:89] found id: ""
	I1217 10:48:54.134171 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.134178 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:54.134183 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:54.134239 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:54.162788 2974151 cri.go:89] found id: ""
	I1217 10:48:54.162802 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.162819 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:54.162825 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:54.162894 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:54.189731 2974151 cri.go:89] found id: ""
	I1217 10:48:54.189749 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.189756 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:54.189762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:54.189850 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:54.214954 2974151 cri.go:89] found id: ""
	I1217 10:48:54.214968 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.214975 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:54.214982 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:54.214992 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:54.232128 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:54.232145 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:54.332775 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:54.323643   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.324176   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.325741   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.326329   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.328065   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:54.332784 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:54.332794 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:54.400873 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:54.400902 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:54.436837 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:54.436855 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:56.995650 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:57.014000 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:57.014068 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:57.039621 2974151 cri.go:89] found id: ""
	I1217 10:48:57.039635 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.039642 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:57.039647 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:57.039706 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:57.063811 2974151 cri.go:89] found id: ""
	I1217 10:48:57.063824 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.063832 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:57.063837 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:57.063901 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:57.089763 2974151 cri.go:89] found id: ""
	I1217 10:48:57.089777 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.089784 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:57.089789 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:57.089849 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:57.119137 2974151 cri.go:89] found id: ""
	I1217 10:48:57.119151 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.119157 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:57.119163 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:57.119222 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:57.145301 2974151 cri.go:89] found id: ""
	I1217 10:48:57.145317 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.145324 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:57.145330 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:57.145390 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:57.169967 2974151 cri.go:89] found id: ""
	I1217 10:48:57.169981 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.169989 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:57.169994 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:57.170055 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:57.199678 2974151 cri.go:89] found id: ""
	I1217 10:48:57.199693 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.199700 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:57.199708 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:57.199718 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:57.259994 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:57.260013 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:57.283244 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:57.283262 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:57.355664 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:57.347248   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.348013   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.349816   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.350323   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.351848   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:57.355675 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:57.355686 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:57.418570 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:57.418593 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:59.953153 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:59.963676 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:59.963736 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:59.989636 2974151 cri.go:89] found id: ""
	I1217 10:48:59.989654 2974151 logs.go:282] 0 containers: []
	W1217 10:48:59.989662 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:59.989667 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:59.989734 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:00.158254 2974151 cri.go:89] found id: ""
	I1217 10:49:00.158276 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.158284 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:00.158290 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:00.158371 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:00.272664 2974151 cri.go:89] found id: ""
	I1217 10:49:00.272680 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.272687 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:00.272693 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:00.272790 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:00.329030 2974151 cri.go:89] found id: ""
	I1217 10:49:00.329045 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.329052 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:00.329058 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:00.329123 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:00.376045 2974151 cri.go:89] found id: ""
	I1217 10:49:00.376060 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.376068 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:00.376074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:00.376141 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:00.406187 2974151 cri.go:89] found id: ""
	I1217 10:49:00.406202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.406210 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:00.406216 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:00.406281 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:00.436523 2974151 cri.go:89] found id: ""
	I1217 10:49:00.436538 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.436546 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:00.436554 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:00.436575 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:00.504375 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:00.495726   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.496591   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498206   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498541   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.500005   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:00.504450 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:00.504460 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:00.568543 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:00.568563 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:00.600756 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:00.600773 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:00.662114 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:00.662131 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
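	For reference, the four "Gathering logs" steps in each cycle can be reproduced by hand inside the node; the commands below are copied verbatim from the ssh_runner lines above, only the grouping into one block is editorial:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a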
	I1217 10:49:03.181138 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:03.191733 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:03.191796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:03.220693 2974151 cri.go:89] found id: ""
	I1217 10:49:03.220707 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.220714 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:03.220719 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:03.220775 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:03.245346 2974151 cri.go:89] found id: ""
	I1217 10:49:03.245359 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.245366 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:03.245371 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:03.245434 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:03.283019 2974151 cri.go:89] found id: ""
	I1217 10:49:03.283034 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.283042 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:03.283072 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:03.283134 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:03.312584 2974151 cri.go:89] found id: ""
	I1217 10:49:03.312599 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.312605 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:03.312611 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:03.312670 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:03.337325 2974151 cri.go:89] found id: ""
	I1217 10:49:03.337340 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.337347 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:03.337352 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:03.337421 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:03.363072 2974151 cri.go:89] found id: ""
	I1217 10:49:03.363086 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.363093 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:03.363099 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:03.363156 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:03.388307 2974151 cri.go:89] found id: ""
	I1217 10:49:03.388321 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.388328 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:03.388336 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:03.388346 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:03.450591 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:03.450611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:03.479831 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:03.479848 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:03.538921 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:03.538940 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:03.557193 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:03.557210 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:03.629818 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:03.620815   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.621960   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.622408   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.623908   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.624403   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
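	The describe-nodes failure is a symptom rather than the cause: the kubeconfig points at localhost:8441 and nothing answers there. A hypothetical direct probe of the endpoint, using the binary and kubeconfig paths taken from the log (expect "connection refused" while no kube-apiserver is running):

    # Illustrative only; not a command the test itself issues.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz \
      || curl -ksS https://localhost:8441/readyz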
	I1217 10:49:06.130079 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:06.140562 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:06.140625 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:06.176078 2974151 cri.go:89] found id: ""
	I1217 10:49:06.176092 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.176100 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:06.176106 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:06.176165 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:06.201648 2974151 cri.go:89] found id: ""
	I1217 10:49:06.201669 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.201678 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:06.201683 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:06.201741 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:06.225531 2974151 cri.go:89] found id: ""
	I1217 10:49:06.225545 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.225552 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:06.225557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:06.225615 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:06.252027 2974151 cri.go:89] found id: ""
	I1217 10:49:06.252042 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.252049 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:06.252056 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:06.252118 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:06.280340 2974151 cri.go:89] found id: ""
	I1217 10:49:06.280353 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.280361 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:06.280366 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:06.280449 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:06.313759 2974151 cri.go:89] found id: ""
	I1217 10:49:06.313773 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.313781 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:06.313786 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:06.313846 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:06.338616 2974151 cri.go:89] found id: ""
	I1217 10:49:06.338630 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.338638 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:06.338645 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:06.338655 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:06.394759 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:06.394784 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:06.412192 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:06.412208 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:06.475020 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:06.466865   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.467591   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469274   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469719   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.471184   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:06.475030 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:06.475039 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:06.537503 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:06.537522 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:09.067381 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:09.078169 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:09.078242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:09.102188 2974151 cri.go:89] found id: ""
	I1217 10:49:09.102202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.102210 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:09.102215 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:09.102276 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:09.127428 2974151 cri.go:89] found id: ""
	I1217 10:49:09.127443 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.127457 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:09.127462 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:09.127523 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:09.155928 2974151 cri.go:89] found id: ""
	I1217 10:49:09.155943 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.155951 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:09.155956 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:09.156013 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:09.180962 2974151 cri.go:89] found id: ""
	I1217 10:49:09.180976 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.180983 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:09.180988 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:09.181047 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:09.206446 2974151 cri.go:89] found id: ""
	I1217 10:49:09.206459 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.206466 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:09.206471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:09.206527 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:09.234163 2974151 cri.go:89] found id: ""
	I1217 10:49:09.234177 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.234184 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:09.234191 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:09.234248 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:09.266062 2974151 cri.go:89] found id: ""
	I1217 10:49:09.266076 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.266083 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:09.266091 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:09.266100 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:09.331047 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:09.331068 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:09.348066 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:09.348082 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:09.416466 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:09.408138   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.408821   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410542   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410884   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.412400   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:09.416475 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:09.416488 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:09.477634 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:09.477656 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:12.006559 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:12.017999 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:12.018064 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:12.043667 2974151 cri.go:89] found id: ""
	I1217 10:49:12.043681 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.043689 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:12.043694 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:12.043755 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:12.067975 2974151 cri.go:89] found id: ""
	I1217 10:49:12.068000 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.068008 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:12.068013 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:12.068082 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:12.093913 2974151 cri.go:89] found id: ""
	I1217 10:49:12.093936 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.093944 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:12.093950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:12.094011 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:12.123009 2974151 cri.go:89] found id: ""
	I1217 10:49:12.123022 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.123029 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:12.123046 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:12.123121 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:12.152263 2974151 cri.go:89] found id: ""
	I1217 10:49:12.152277 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.152284 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:12.152299 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:12.152357 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:12.178500 2974151 cri.go:89] found id: ""
	I1217 10:49:12.178514 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.178521 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:12.178527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:12.178601 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:12.203660 2974151 cri.go:89] found id: ""
	I1217 10:49:12.203674 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.203692 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:12.203700 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:12.203711 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:12.261019 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:12.261039 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:12.279774 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:12.279790 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:12.350172 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:12.342156   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.342650   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344118   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344659   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.346217   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:12.350182 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:12.350192 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:12.412715 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:12.412734 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
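	The cadence of the loop is visible in the timestamps: one pgrep probe for the apiserver roughly every three seconds, followed by the same log sweep. A hypothetical shell equivalent of the wait minikube is effectively performing here, built from the log's own pgrep pattern:

    # Poll (as the log does, every ~3 s) until an apiserver process appears.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done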
	I1217 10:49:14.942372 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:14.953073 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:14.953133 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:14.986889 2974151 cri.go:89] found id: ""
	I1217 10:49:14.986903 2974151 logs.go:282] 0 containers: []
	W1217 10:49:14.986910 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:14.986916 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:14.987012 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:15.024956 2974151 cri.go:89] found id: ""
	I1217 10:49:15.024972 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.024980 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:15.024986 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:15.025062 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:15.055135 2974151 cri.go:89] found id: ""
	I1217 10:49:15.055159 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.055170 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:15.055175 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:15.055244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:15.083268 2974151 cri.go:89] found id: ""
	I1217 10:49:15.083283 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.083310 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:15.083316 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:15.083386 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:15.110734 2974151 cri.go:89] found id: ""
	I1217 10:49:15.110750 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.110757 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:15.110764 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:15.110825 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:15.140854 2974151 cri.go:89] found id: ""
	I1217 10:49:15.140869 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.140876 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:15.140881 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:15.140981 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:15.167259 2974151 cri.go:89] found id: ""
	I1217 10:49:15.167273 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.167280 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:15.167288 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:15.167298 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:15.224081 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:15.224100 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:15.241661 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:15.241679 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:15.322485 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:15.313320   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.313943   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316017   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316658   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.318128   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
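The "describe nodes" gatherer shells into the node and runs the version-pinned kubectl against the node-local kubeconfig, so it fails for the same reason as every other API call here. The failing probe can be replayed by hand; the binary and kubeconfig paths below are copied from the log line above, with only the minikube ssh wrapper added:

    # Replay the failing probe manually from the host:
    minikube -p functional-232588 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig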
	I1217 10:49:15.322495 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:15.322517 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:15.385975 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:15.385996 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
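The "container status" probe is a small shell fallback chain: use whatever crictl resolves on the PATH, and only if that whole command fails fall back to the Docker CLI. Spelled out, the one-liner above is behaviour-equivalent to:

    # Same fallback logic as the probe above, expanded for readability:
    CRICTL="$(which crictl || echo crictl)"    # bare name if which finds nothing
    sudo "$CRICTL" ps -a || sudo docker ps -a  # docker only runs if the crictl call fails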
	I1217 10:49:17.915565 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:17.925558 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:17.925619 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:17.950881 2974151 cri.go:89] found id: ""
	I1217 10:49:17.950895 2974151 logs.go:282] 0 containers: []
	W1217 10:49:17.950902 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:17.950907 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:17.950964 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:17.975955 2974151 cri.go:89] found id: ""
	I1217 10:49:17.975969 2974151 logs.go:282] 0 containers: []
	W1217 10:49:17.975975 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:17.975980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:17.976039 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:18.004484 2974151 cri.go:89] found id: ""
	I1217 10:49:18.004503 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.004512 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:18.004517 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:18.004597 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:18.031679 2974151 cri.go:89] found id: ""
	I1217 10:49:18.031694 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.031702 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:18.031708 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:18.031775 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:18.059398 2974151 cri.go:89] found id: ""
	I1217 10:49:18.059412 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.059436 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:18.059443 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:18.059504 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:18.085330 2974151 cri.go:89] found id: ""
	I1217 10:49:18.085344 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.085352 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:18.085357 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:18.085420 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:18.114569 2974151 cri.go:89] found id: ""
	I1217 10:49:18.114585 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.114592 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:18.114600 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:18.114611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:18.178110 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:18.169772   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.170633   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172208   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172731   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.174231   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:18.178122 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:18.178132 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:18.241410 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:18.241434 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:18.273882 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:18.273898 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:18.334306 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:18.334324 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
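These gathering passes repeat on a roughly three-second cadence (10:49:12, :15, :18, :21, ... in the timestamps) because the start logic keeps polling for a running apiserver process and re-collects diagnostics after each miss. A hedged reconstruction of the equivalent wait loop, inferred from the timestamps rather than taken from minikube's source:

    # Hypothetical shape of the poll loop the timestamps imply:
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sudo crictl ps -a --quiet --name=kube-apiserver  # was the container ever created?
      sleep 3
    done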
	I1217 10:49:20.852121 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:20.862188 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:20.862248 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:20.886819 2974151 cri.go:89] found id: ""
	I1217 10:49:20.886834 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.886850 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:20.886857 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:20.886930 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:20.913071 2974151 cri.go:89] found id: ""
	I1217 10:49:20.913086 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.913093 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:20.913098 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:20.913157 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:20.937301 2974151 cri.go:89] found id: ""
	I1217 10:49:20.937315 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.937322 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:20.937327 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:20.937386 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:20.966247 2974151 cri.go:89] found id: ""
	I1217 10:49:20.966260 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.966267 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:20.966272 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:20.966328 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:20.991713 2974151 cri.go:89] found id: ""
	I1217 10:49:20.991727 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.991734 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:20.991739 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:20.991796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:21.017813 2974151 cri.go:89] found id: ""
	I1217 10:49:21.017828 2974151 logs.go:282] 0 containers: []
	W1217 10:49:21.017835 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:21.017841 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:21.017901 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:21.047576 2974151 cri.go:89] found id: ""
	I1217 10:49:21.047590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:21.047598 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:21.047605 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:21.047615 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:21.109681 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:21.109707 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:21.127095 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:21.127114 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:21.192482 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:21.184199   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.184777   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186485   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186953   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.188551   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:21.192493 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:21.192504 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:21.256363 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:21.256383 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:23.824987 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:23.835117 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:23.835179 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:23.860953 2974151 cri.go:89] found id: ""
	I1217 10:49:23.860966 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.860973 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:23.860979 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:23.861036 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:23.894776 2974151 cri.go:89] found id: ""
	I1217 10:49:23.894790 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.894797 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:23.894802 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:23.894863 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:23.923645 2974151 cri.go:89] found id: ""
	I1217 10:49:23.923660 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.923667 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:23.923678 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:23.923735 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:23.950354 2974151 cri.go:89] found id: ""
	I1217 10:49:23.950368 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.950374 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:23.950380 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:23.950437 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:23.974645 2974151 cri.go:89] found id: ""
	I1217 10:49:23.974659 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.974666 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:23.974671 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:23.974732 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:24.000121 2974151 cri.go:89] found id: ""
	I1217 10:49:24.000149 2974151 logs.go:282] 0 containers: []
	W1217 10:49:24.000157 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:24.000163 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:24.000242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:24.034475 2974151 cri.go:89] found id: ""
	I1217 10:49:24.034489 2974151 logs.go:282] 0 containers: []
	W1217 10:49:24.034497 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:24.034505 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:24.034514 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:24.099963 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:24.099984 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:24.136430 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:24.136447 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:24.192589 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:24.192651 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:24.209690 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:24.209707 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:24.292778 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:24.284539   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.285387   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287069   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287380   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.288843   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
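When the apiserver never comes up, the journald reads are the only diagnostics with any content: every crictl listing above is empty and every kubectl probe returns nothing. The kubelet and containerd gatherers are plain journalctl queries for the last 400 records per systemd unit, reproducible verbatim:

    # Pull the same journals the harness collects (unit names from the log):
    minikube -p functional-232588 ssh -- sudo journalctl -u kubelet -n 400
    minikube -p functional-232588 ssh -- sudo journalctl -u containerd -n 400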
	I1217 10:49:26.793038 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:26.803569 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:26.803630 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:26.829202 2974151 cri.go:89] found id: ""
	I1217 10:49:26.829215 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.829222 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:26.829227 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:26.829285 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:26.855339 2974151 cri.go:89] found id: ""
	I1217 10:49:26.855353 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.855359 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:26.855365 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:26.855434 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:26.882145 2974151 cri.go:89] found id: ""
	I1217 10:49:26.882160 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.882168 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:26.882174 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:26.882231 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:26.906912 2974151 cri.go:89] found id: ""
	I1217 10:49:26.906925 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.906932 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:26.906937 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:26.906994 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:26.931691 2974151 cri.go:89] found id: ""
	I1217 10:49:26.931714 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.931722 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:26.931732 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:26.931798 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:26.957483 2974151 cri.go:89] found id: ""
	I1217 10:49:26.957497 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.957504 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:26.957510 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:26.957570 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:26.981546 2974151 cri.go:89] found id: ""
	I1217 10:49:26.981560 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.981567 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:26.981574 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:26.981584 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:27.038884 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:27.038905 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:27.059063 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:27.059079 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:27.122721 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:27.114006   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.114575   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.116274   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.117079   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.118797   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:27.122731 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:27.122741 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:27.188207 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:27.188227 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:29.720397 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:29.731016 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:29.731089 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:29.759816 2974151 cri.go:89] found id: ""
	I1217 10:49:29.759836 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.759843 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:29.759848 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:29.759909 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:29.784725 2974151 cri.go:89] found id: ""
	I1217 10:49:29.784739 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.784747 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:29.784752 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:29.784813 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:29.810710 2974151 cri.go:89] found id: ""
	I1217 10:49:29.810724 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.810731 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:29.810736 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:29.810796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:29.835166 2974151 cri.go:89] found id: ""
	I1217 10:49:29.835180 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.835187 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:29.835196 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:29.835255 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:29.862724 2974151 cri.go:89] found id: ""
	I1217 10:49:29.862738 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.862745 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:29.862750 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:29.862814 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:29.887572 2974151 cri.go:89] found id: ""
	I1217 10:49:29.887590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.887597 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:29.887608 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:29.887676 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:29.911679 2974151 cri.go:89] found id: ""
	I1217 10:49:29.911693 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.911700 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:29.911708 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:29.911717 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:29.974573 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:29.974595 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:30.028175 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:30.028195 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:30.102876 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:30.102898 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:30.120802 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:30.120826 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:30.191763 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:30.183313   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.184024   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.185583   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.186151   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.187552   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:32.692593 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:32.703024 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:32.703087 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:32.733277 2974151 cri.go:89] found id: ""
	I1217 10:49:32.733302 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.733310 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:32.733317 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:32.733384 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:32.763219 2974151 cri.go:89] found id: ""
	I1217 10:49:32.763234 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.763241 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:32.763246 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:32.763304 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:32.793128 2974151 cri.go:89] found id: ""
	I1217 10:49:32.793143 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.793150 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:32.793155 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:32.793213 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:32.824178 2974151 cri.go:89] found id: ""
	I1217 10:49:32.824194 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.824201 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:32.824206 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:32.824271 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:32.854145 2974151 cri.go:89] found id: ""
	I1217 10:49:32.854170 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.854178 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:32.854183 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:32.854251 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:32.879767 2974151 cri.go:89] found id: ""
	I1217 10:49:32.879797 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.879804 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:32.879809 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:32.879899 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:32.909819 2974151 cri.go:89] found id: ""
	I1217 10:49:32.909833 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.909842 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:32.909849 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:32.909859 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:32.938841 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:32.938857 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:32.995133 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:32.995156 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:33.014953 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:33.014974 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:33.085045 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:33.075667   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.076471   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078226   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078820   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.080383   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:33.085054 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:33.085065 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:35.651037 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:35.661187 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:35.661246 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:35.687255 2974151 cri.go:89] found id: ""
	I1217 10:49:35.687270 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.687277 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:35.687282 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:35.687340 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:35.713953 2974151 cri.go:89] found id: ""
	I1217 10:49:35.713967 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.713974 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:35.713980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:35.714040 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:35.742852 2974151 cri.go:89] found id: ""
	I1217 10:49:35.742866 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.742874 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:35.742879 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:35.742937 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:35.768219 2974151 cri.go:89] found id: ""
	I1217 10:49:35.768233 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.768240 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:35.768246 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:35.768314 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:35.792498 2974151 cri.go:89] found id: ""
	I1217 10:49:35.792512 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.792519 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:35.792524 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:35.792583 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:35.818063 2974151 cri.go:89] found id: ""
	I1217 10:49:35.818077 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.818084 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:35.818089 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:35.818147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:35.843090 2974151 cri.go:89] found id: ""
	I1217 10:49:35.843105 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.843111 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:35.843119 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:35.843129 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:35.899655 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:35.899673 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:35.916834 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:35.916850 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:35.982052 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:35.973406   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.974102   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.975751   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.976284   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.977956   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:35.982062 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:35.982075 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:36.049729 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:36.049750 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:38.582447 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:38.592471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:38.592528 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:38.617757 2974151 cri.go:89] found id: ""
	I1217 10:49:38.617772 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.617779 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:38.617786 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:38.617845 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:38.647228 2974151 cri.go:89] found id: ""
	I1217 10:49:38.647242 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.647249 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:38.647254 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:38.647312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:38.672309 2974151 cri.go:89] found id: ""
	I1217 10:49:38.672324 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.672331 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:38.672336 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:38.672395 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:38.699575 2974151 cri.go:89] found id: ""
	I1217 10:49:38.699590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.699597 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:38.699603 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:38.699660 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:38.729276 2974151 cri.go:89] found id: ""
	I1217 10:49:38.729290 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.729297 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:38.729303 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:38.729361 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:38.757110 2974151 cri.go:89] found id: ""
	I1217 10:49:38.757124 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.757131 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:38.757137 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:38.757197 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:38.783523 2974151 cri.go:89] found id: ""
	I1217 10:49:38.783537 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.783544 2974151 logs.go:284] No container was found matching "kindnet"
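	The probe sequence above, repeated on every cycle, asks containerd via crictl for any container, running or exited, matching each control-plane component, and every query returns an empty ID list. A minimal sketch of the same sweep, runnable by hand inside the node (assumes crictl is on PATH, as the log's own fallback suggests):

	    # Mirror minikube's per-component probe; each empty result corresponds to a
	    # "No container was found matching ..." warning in the log above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching $name"
	    done

	Because even `ps -a` (which includes exited containers) finds nothing, the components were never created at all, which points at kubelet or the static-pod manifests rather than at crash-looping containers.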
	I1217 10:49:38.783551 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:38.783562 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:38.854691 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:38.846060   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.846723   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.848354   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.849037   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.850802   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:38.846060   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.846723   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.848354   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.849037   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.850802   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:38.854701 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:38.854713 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:38.918821 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:38.918843 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:38.947201 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:38.947217 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:39.004566 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:39.004587 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
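	With the probes empty, minikube gathers the same diagnostic bundle on each cycle: the kubelet and containerd journals, a filtered dmesg, container status, and a `describe nodes` attempt. The bundle can be reproduced by hand inside the node; the commands below mirror the Run: lines above (backtick substitution rewritten as $(...)):

	    # The per-cycle log bundle.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

	Of these, the kubelet journal is the most likely to explain why the control-plane static pods never materialized.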
	I1217 10:49:41.522977 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:41.536227 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:41.536288 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:41.566436 2974151 cri.go:89] found id: ""
	I1217 10:49:41.566451 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.566458 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:41.566466 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:41.566527 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:41.599863 2974151 cri.go:89] found id: ""
	I1217 10:49:41.599879 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.599886 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:41.599892 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:41.599956 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:41.631187 2974151 cri.go:89] found id: ""
	I1217 10:49:41.631202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.631209 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:41.631216 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:41.631274 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:41.658402 2974151 cri.go:89] found id: ""
	I1217 10:49:41.658416 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.658423 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:41.658428 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:41.658487 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:41.686724 2974151 cri.go:89] found id: ""
	I1217 10:49:41.686738 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.686745 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:41.686751 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:41.686809 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:41.721194 2974151 cri.go:89] found id: ""
	I1217 10:49:41.721208 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.721215 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:41.721220 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:41.721279 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:41.750295 2974151 cri.go:89] found id: ""
	I1217 10:49:41.750309 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.750316 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:41.750323 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:41.750334 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:41.779389 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:41.779406 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:41.837692 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:41.837715 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:41.854830 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:41.854847 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:41.919451 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:41.911491   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.912035   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.913552   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.914095   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.915570   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:41.911491   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.912035   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.913552   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.914095   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.915570   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:41.919461 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:41.919470 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:44.482271 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:44.492656 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:44.492720 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:44.530744 2974151 cri.go:89] found id: ""
	I1217 10:49:44.530758 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.530765 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:44.530770 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:44.530831 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:44.556602 2974151 cri.go:89] found id: ""
	I1217 10:49:44.556616 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.556624 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:44.556629 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:44.556687 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:44.582820 2974151 cri.go:89] found id: ""
	I1217 10:49:44.582835 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.582842 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:44.582847 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:44.582906 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:44.607152 2974151 cri.go:89] found id: ""
	I1217 10:49:44.607166 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.607173 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:44.607184 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:44.607244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:44.634565 2974151 cri.go:89] found id: ""
	I1217 10:49:44.634579 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.634587 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:44.634592 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:44.634662 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:44.661979 2974151 cri.go:89] found id: ""
	I1217 10:49:44.661993 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.662000 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:44.662005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:44.662066 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:44.686675 2974151 cri.go:89] found id: ""
	I1217 10:49:44.686697 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.686705 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:44.686713 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:44.686722 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:44.743011 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:44.743033 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:44.759816 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:44.759833 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:44.824819 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:44.816544   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.817205   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.818745   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.819310   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.820870   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:44.816544   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.817205   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.818745   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.819310   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.820870   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:44.824830 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:44.824841 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:44.890788 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:44.890807 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:47.418865 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:47.429392 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:47.429467 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:47.454629 2974151 cri.go:89] found id: ""
	I1217 10:49:47.454643 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.454650 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:47.454655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:47.454766 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:47.480876 2974151 cri.go:89] found id: ""
	I1217 10:49:47.480890 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.480897 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:47.480902 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:47.480970 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:47.512027 2974151 cri.go:89] found id: ""
	I1217 10:49:47.512041 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.512054 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:47.512060 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:47.512120 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:47.539586 2974151 cri.go:89] found id: ""
	I1217 10:49:47.539600 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.539608 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:47.539613 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:47.539671 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:47.566423 2974151 cri.go:89] found id: ""
	I1217 10:49:47.566437 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.566444 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:47.566450 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:47.566507 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:47.592329 2974151 cri.go:89] found id: ""
	I1217 10:49:47.592343 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.592350 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:47.592355 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:47.592442 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:47.617999 2974151 cri.go:89] found id: ""
	I1217 10:49:47.618013 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.618020 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:47.618028 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:47.618037 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:47.678218 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:47.678240 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:47.695642 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:47.695659 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:47.762123 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:47.753095   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.754063   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.755748   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.756187   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.757812   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:47.753095   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.754063   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.755748   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.756187   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.757812   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:47.762133 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:47.762146 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:47.828387 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:47.828408 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:50.363629 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:50.373970 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:50.374026 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:50.398664 2974151 cri.go:89] found id: ""
	I1217 10:49:50.398678 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.398685 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:50.398690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:50.398749 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:50.424119 2974151 cri.go:89] found id: ""
	I1217 10:49:50.424132 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.424139 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:50.424144 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:50.424203 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:50.450501 2974151 cri.go:89] found id: ""
	I1217 10:49:50.450516 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.450523 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:50.450529 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:50.450591 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:50.479279 2974151 cri.go:89] found id: ""
	I1217 10:49:50.479330 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.479338 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:50.479344 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:50.479402 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:50.514044 2974151 cri.go:89] found id: ""
	I1217 10:49:50.514058 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.514065 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:50.514070 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:50.514147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:50.550857 2974151 cri.go:89] found id: ""
	I1217 10:49:50.550871 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.550878 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:50.550883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:50.550943 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:50.586702 2974151 cri.go:89] found id: ""
	I1217 10:49:50.586716 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.586724 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:50.586731 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:50.586740 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:50.649317 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:50.649338 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:50.681689 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:50.681706 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:50.739069 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:50.739092 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:50.756760 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:50.756777 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:50.826240 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:50.816693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.817339   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819115   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819743   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.821406   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:50.816693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.817339   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819115   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819743   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.821406   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:53.327009 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:53.338042 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:53.338105 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:53.364395 2974151 cri.go:89] found id: ""
	I1217 10:49:53.364409 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.364437 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:53.364443 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:53.364504 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:53.391405 2974151 cri.go:89] found id: ""
	I1217 10:49:53.391418 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.391425 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:53.391435 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:53.391495 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:53.415894 2974151 cri.go:89] found id: ""
	I1217 10:49:53.415909 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.415916 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:53.415921 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:53.415987 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:53.441489 2974151 cri.go:89] found id: ""
	I1217 10:49:53.441505 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.441512 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:53.441518 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:53.441577 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:53.470465 2974151 cri.go:89] found id: ""
	I1217 10:49:53.470480 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.470487 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:53.470492 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:53.470580 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:53.496777 2974151 cri.go:89] found id: ""
	I1217 10:49:53.496791 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.496798 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:53.496804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:53.496862 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:53.522462 2974151 cri.go:89] found id: ""
	I1217 10:49:53.522477 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.522484 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:53.522492 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:53.522503 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:53.587962 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:53.587981 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:53.605021 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:53.605038 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:53.674653 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:53.666595   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.667148   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.668629   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.669055   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.670469   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:53.666595   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.667148   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.668629   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.669055   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.670469   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:53.674671 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:53.674682 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:53.736888 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:53.736908 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:56.264574 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:56.274948 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:56.275019 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:56.306086 2974151 cri.go:89] found id: ""
	I1217 10:49:56.306108 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.306116 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:56.306122 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:56.306189 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:56.331503 2974151 cri.go:89] found id: ""
	I1217 10:49:56.331517 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.331524 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:56.331529 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:56.331588 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:56.357713 2974151 cri.go:89] found id: ""
	I1217 10:49:56.357727 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.357734 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:56.357740 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:56.357804 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:56.386307 2974151 cri.go:89] found id: ""
	I1217 10:49:56.386322 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.386329 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:56.386335 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:56.386392 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:56.411103 2974151 cri.go:89] found id: ""
	I1217 10:49:56.411116 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.411148 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:56.411154 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:56.411210 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:56.438603 2974151 cri.go:89] found id: ""
	I1217 10:49:56.438617 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.438632 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:56.438638 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:56.438700 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:56.463485 2974151 cri.go:89] found id: ""
	I1217 10:49:56.463499 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.463506 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:56.463513 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:56.463526 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:56.480151 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:56.480170 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:56.564122 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:56.555873   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.556612   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558127   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558422   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.559904   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:56.555873   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.556612   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558127   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558422   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.559904   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:56.564133 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:56.564152 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:56.631606 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:56.631625 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:56.658603 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:56.658621 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:59.216557 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:59.226542 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:59.226605 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:59.250485 2974151 cri.go:89] found id: ""
	I1217 10:49:59.250501 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.250522 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:59.250528 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:59.250597 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:59.275922 2974151 cri.go:89] found id: ""
	I1217 10:49:59.275936 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.275945 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:59.275960 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:59.276021 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:59.305346 2974151 cri.go:89] found id: ""
	I1217 10:49:59.305372 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.305380 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:59.305386 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:59.305454 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:59.329784 2974151 cri.go:89] found id: ""
	I1217 10:49:59.329799 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.329806 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:59.329812 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:59.329870 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:59.353939 2974151 cri.go:89] found id: ""
	I1217 10:49:59.353953 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.353961 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:59.353968 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:59.354030 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:59.379444 2974151 cri.go:89] found id: ""
	I1217 10:49:59.379458 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.379465 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:59.379471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:59.379535 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:59.404346 2974151 cri.go:89] found id: ""
	I1217 10:49:59.404360 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.404367 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:59.404374 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:59.404385 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:59.421191 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:59.421209 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:59.484153 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:59.476366   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.477052   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478594   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478902   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.480341   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:59.476366   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.477052   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478594   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478902   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.480341   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:59.484164 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:59.484177 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:59.553474 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:59.553493 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:59.587183 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:59.587199 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
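	Each cycle opens with the process-level probe seen on the next line: `pgrep -xnf` matches the pattern against the full command line (-f), requires a whole-line match (-x), and reports only the newest matching process (-n). A standalone version:

	    # The probe that opens every retry cycle; a non-zero exit means no apiserver process exists at all.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	Its consistent failure across every cycle in this window (10:49:38 through 10:50:02) shows the missing apiserver is a steady state here, not a slow start.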
	I1217 10:50:02.144181 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:02.155199 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:02.155292 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:02.188757 2974151 cri.go:89] found id: ""
	I1217 10:50:02.188773 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.188780 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:02.188785 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:02.188851 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:02.219315 2974151 cri.go:89] found id: ""
	I1217 10:50:02.219330 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.219337 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:02.219342 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:02.219406 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:02.244595 2974151 cri.go:89] found id: ""
	I1217 10:50:02.244609 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.244616 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:02.244622 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:02.244684 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:02.270632 2974151 cri.go:89] found id: ""
	I1217 10:50:02.270647 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.270654 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:02.270659 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:02.270718 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:02.296393 2974151 cri.go:89] found id: ""
	I1217 10:50:02.296407 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.296447 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:02.296454 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:02.296521 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:02.326837 2974151 cri.go:89] found id: ""
	I1217 10:50:02.326851 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.326859 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:02.326868 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:02.326931 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:02.356502 2974151 cri.go:89] found id: ""
	I1217 10:50:02.356517 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.356527 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:02.356536 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:02.356548 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:02.434224 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:02.417603   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.418251   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.426024   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428283   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428822   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:02.434234 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:02.434244 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:02.502034 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:02.502055 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:02.541286 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:02.541303 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:02.606116 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:02.606137 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
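
The block above is one pass of minikube's control-plane wait loop: every two to three seconds it probes for a kube-apiserver process, asks the CRI runtime for each expected control-plane container, and, finding none, gathers kubelet, containerd, dmesg and describe-nodes output before trying again. A minimal sketch of the probe, built only from commands that appear in this log (the 3 s interval and the retry bound are illustrative assumptions, not minikube's real timeout):

    # Sketch: poll for the apiserver the way the log shows minikube doing it.
    # Interval and loop bound are assumptions; the commands are from the log.
    for _ in $(seq 1 80); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo 'kube-apiserver process is up'; exit 0
      fi
      cid=$(sudo crictl ps -a --quiet --name=kube-apiserver)
      if [ -n "$cid" ]; then
        echo "kube-apiserver container: $cid"; exit 0
      fi
      sleep 3
    done
    echo 'kube-apiserver never appeared' >&2; exit 1
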
	I1217 10:50:05.125496 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:05.136157 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:05.136217 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:05.160937 2974151 cri.go:89] found id: ""
	I1217 10:50:05.160952 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.160959 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:05.160964 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:05.161024 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:05.185873 2974151 cri.go:89] found id: ""
	I1217 10:50:05.185887 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.185894 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:05.185900 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:05.185999 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:05.212646 2974151 cri.go:89] found id: ""
	I1217 10:50:05.212676 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.212684 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:05.212690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:05.212767 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:05.238323 2974151 cri.go:89] found id: ""
	I1217 10:50:05.238340 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.238347 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:05.238353 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:05.238414 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:05.263764 2974151 cri.go:89] found id: ""
	I1217 10:50:05.263779 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.263786 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:05.263792 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:05.263849 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:05.289054 2974151 cri.go:89] found id: ""
	I1217 10:50:05.289069 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.289076 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:05.289081 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:05.289144 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:05.314515 2974151 cri.go:89] found id: ""
	I1217 10:50:05.314530 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.314538 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:05.314546 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:05.314556 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:05.380980 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:05.381002 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:05.414207 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:05.414222 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:05.472281 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:05.472301 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:05.489358 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:05.489375 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:05.571554 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:05.562906   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.563808   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.565527   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.566129   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.567151   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:08.071830 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:08.082387 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:08.082462 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:08.110539 2974151 cri.go:89] found id: ""
	I1217 10:50:08.110553 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.110561 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:08.110566 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:08.110629 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:08.135732 2974151 cri.go:89] found id: ""
	I1217 10:50:08.135746 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.135754 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:08.135760 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:08.135828 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:08.162274 2974151 cri.go:89] found id: ""
	I1217 10:50:08.162289 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.162296 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:08.162302 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:08.162359 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:08.187522 2974151 cri.go:89] found id: ""
	I1217 10:50:08.187536 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.187543 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:08.187549 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:08.187618 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:08.212868 2974151 cri.go:89] found id: ""
	I1217 10:50:08.212883 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.212890 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:08.212896 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:08.212958 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:08.236894 2974151 cri.go:89] found id: ""
	I1217 10:50:08.236908 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.236915 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:08.236921 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:08.236981 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:08.262293 2974151 cri.go:89] found id: ""
	I1217 10:50:08.262308 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.262315 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:08.262322 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:08.262332 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:08.320099 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:08.320118 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:08.337595 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:08.337611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:08.404535 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:08.395902   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.396655   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398294   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398971   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.400705   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:08.404545 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:08.404557 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:08.467318 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:08.467338 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:11.014160 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:11.025076 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:11.025146 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:11.050236 2974151 cri.go:89] found id: ""
	I1217 10:50:11.050252 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.050260 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:11.050265 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:11.050329 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:11.081289 2974151 cri.go:89] found id: ""
	I1217 10:50:11.081311 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.081318 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:11.081324 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:11.081385 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:11.111117 2974151 cri.go:89] found id: ""
	I1217 10:50:11.111134 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.111141 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:11.111146 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:11.111209 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:11.137886 2974151 cri.go:89] found id: ""
	I1217 10:50:11.137900 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.137908 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:11.137913 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:11.137972 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:11.164080 2974151 cri.go:89] found id: ""
	I1217 10:50:11.164096 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.164104 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:11.164119 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:11.164183 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:11.194241 2974151 cri.go:89] found id: ""
	I1217 10:50:11.194256 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.194264 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:11.194269 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:11.194331 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:11.220644 2974151 cri.go:89] found id: ""
	I1217 10:50:11.220659 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.220666 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:11.220673 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:11.220687 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:11.283052 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:11.283070 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:11.310700 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:11.310717 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:11.366749 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:11.366769 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:11.383957 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:11.383975 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:11.451001 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:11.442629   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.443048   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.444733   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.445416   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.447157   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
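
Iteration after iteration the failure is identical: kubectl is refused on localhost:8441 because no apiserver container ever starts, so every describe-nodes attempt dies before reaching the cluster. Two illustrative checks (hypothetical, not taken from this log) that separate "nothing is listening" from "kubectl is misconfigured":

    # Hypothetical troubleshooting: the connection-refused errors above imply
    # this socket listing returns no rows for the apiserver port.
    sudo ss -ltnp 'sport = :8441'
    # Reproduce the exact request kubectl was refused on:
    curl -k 'https://localhost:8441/api?timeout=32s'
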
	I1217 10:50:13.952741 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:13.962784 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:13.962846 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:13.987247 2974151 cri.go:89] found id: ""
	I1217 10:50:13.987262 2974151 logs.go:282] 0 containers: []
	W1217 10:50:13.987269 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:13.987274 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:13.987340 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:14.012962 2974151 cri.go:89] found id: ""
	I1217 10:50:14.012977 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.012984 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:14.012990 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:14.013058 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:14.038181 2974151 cri.go:89] found id: ""
	I1217 10:50:14.038195 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.038203 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:14.038208 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:14.038266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:14.062700 2974151 cri.go:89] found id: ""
	I1217 10:50:14.062715 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.062723 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:14.062728 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:14.062785 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:14.093364 2974151 cri.go:89] found id: ""
	I1217 10:50:14.093386 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.093393 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:14.093399 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:14.093457 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:14.118504 2974151 cri.go:89] found id: ""
	I1217 10:50:14.118519 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.118525 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:14.118531 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:14.118596 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:14.143182 2974151 cri.go:89] found id: ""
	I1217 10:50:14.143198 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.143204 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:14.143212 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:14.143223 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:14.201003 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:14.201024 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:14.218136 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:14.218153 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:14.291347 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:14.280379   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.281686   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285094   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285633   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.287421   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:14.291358 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:14.291370 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:14.354518 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:14.354541 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:16.888907 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:16.899327 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:16.899396 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:16.924553 2974151 cri.go:89] found id: ""
	I1217 10:50:16.924572 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.924580 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:16.924586 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:16.924646 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:16.950729 2974151 cri.go:89] found id: ""
	I1217 10:50:16.950743 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.950750 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:16.950756 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:16.950811 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:16.978167 2974151 cri.go:89] found id: ""
	I1217 10:50:16.978181 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.978189 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:16.978193 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:16.978254 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:17.005223 2974151 cri.go:89] found id: ""
	I1217 10:50:17.005239 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.005247 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:17.005253 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:17.005336 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:17.031301 2974151 cri.go:89] found id: ""
	I1217 10:50:17.031315 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.031323 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:17.031328 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:17.031393 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:17.058782 2974151 cri.go:89] found id: ""
	I1217 10:50:17.058796 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.058804 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:17.058810 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:17.058869 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:17.084580 2974151 cri.go:89] found id: ""
	I1217 10:50:17.084595 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.084603 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:17.084611 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:17.084628 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:17.144045 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:17.144067 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:17.161459 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:17.161476 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:17.230344 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:17.221052   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.221467   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.224663   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.225044   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.226301   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:17.230353 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:17.230364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:17.292978 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:17.292998 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:19.828581 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:19.838853 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:19.838914 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:19.864198 2974151 cri.go:89] found id: ""
	I1217 10:50:19.864213 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.864220 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:19.864225 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:19.864284 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:19.899721 2974151 cri.go:89] found id: ""
	I1217 10:50:19.899735 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.899758 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:19.899764 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:19.899837 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:19.928330 2974151 cri.go:89] found id: ""
	I1217 10:50:19.928345 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.928352 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:19.928356 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:19.928445 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:19.954497 2974151 cri.go:89] found id: ""
	I1217 10:50:19.954514 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.954538 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:19.954545 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:19.954608 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:19.980091 2974151 cri.go:89] found id: ""
	I1217 10:50:19.980105 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.980112 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:19.980118 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:19.980184 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:20.010659 2974151 cri.go:89] found id: ""
	I1217 10:50:20.010676 2974151 logs.go:282] 0 containers: []
	W1217 10:50:20.010685 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:20.010691 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:20.010767 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:20.043088 2974151 cri.go:89] found id: ""
	I1217 10:50:20.043104 2974151 logs.go:282] 0 containers: []
	W1217 10:50:20.043113 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:20.043121 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:20.043132 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:20.100529 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:20.100550 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:20.118575 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:20.118591 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:20.187144 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:20.178717   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.179517   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181042   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181412   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.182990   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:50:20.187155 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:20.187167 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:20.249393 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:20.249414 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:22.778795 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:22.790536 2974151 kubeadm.go:602] duration metric: took 4m2.042602584s to restartPrimaryControlPlane
	W1217 10:50:22.790601 2974151 out.go:285] ! Unable to restart control-plane node(s), will reset cluster
	I1217 10:50:22.790675 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
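
A note on what follows: "kubeadm reset --force" is expected to delete the node's kubeconfig files under /etc/kubernetes, so the config check a few lines below finds none of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf, and the stale-config cleanup degenerates into removing files that are already gone.
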
	I1217 10:50:23.205315 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 10:50:23.219008 2974151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 10:50:23.227117 2974151 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:50:23.227176 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:50:23.235370 2974151 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:50:23.235380 2974151 kubeadm.go:158] found existing configuration files:
	
	I1217 10:50:23.235436 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:50:23.243539 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:50:23.243597 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:50:23.251153 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:50:23.259288 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:50:23.259364 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:50:23.267370 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:50:23.275727 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:50:23.275787 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:50:23.283930 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:50:23.292280 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:50:23.292340 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
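
The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already points at this cluster's control-plane endpoint, otherwise remove it so kubeadm regenerates it. Condensed into a sketch (file names and endpoint URL are exactly those from the log):

    # Keep each kubeconfig only if it targets this cluster's endpoint.
    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
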
	I1217 10:50:23.300010 2974151 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:50:23.340550 2974151 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:50:23.340717 2974151 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:50:23.412202 2974151 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:50:23.412287 2974151 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:50:23.412322 2974151 kubeadm.go:319] OS: Linux
	I1217 10:50:23.412377 2974151 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:50:23.412441 2974151 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:50:23.412489 2974151 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:50:23.412536 2974151 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:50:23.412585 2974151 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:50:23.412632 2974151 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:50:23.412677 2974151 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:50:23.412724 2974151 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:50:23.412769 2974151 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:50:23.486890 2974151 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:50:23.486989 2974151 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:50:23.487074 2974151 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:50:23.492949 2974151 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:50:23.496478 2974151 out.go:252]   - Generating certificates and keys ...
	I1217 10:50:23.496568 2974151 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:50:23.496637 2974151 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:50:23.496718 2974151 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 10:50:23.496782 2974151 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 10:50:23.496856 2974151 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 10:50:23.496912 2974151 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 10:50:23.496979 2974151 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 10:50:23.497043 2974151 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 10:50:23.497122 2974151 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 10:50:23.497199 2974151 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 10:50:23.497239 2974151 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 10:50:23.497303 2974151 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:50:23.659882 2974151 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:50:23.806390 2974151 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:50:23.994170 2974151 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:50:24.254389 2974151 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:50:24.616203 2974151 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:50:24.616885 2974151 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:50:24.619452 2974151 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:50:24.622875 2974151 out.go:252]   - Booting up control plane ...
	I1217 10:50:24.622979 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:50:24.623060 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:50:24.623134 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:50:24.643299 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:50:24.643404 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:50:24.652837 2974151 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:50:24.652937 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:50:24.652975 2974151 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:50:24.787245 2974151 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:50:24.787354 2974151 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:54:24.787078 2974151 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000331472s
	I1217 10:54:24.787103 2974151 kubeadm.go:319] 
	I1217 10:54:24.787156 2974151 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:54:24.787187 2974151 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:54:24.787285 2974151 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:54:24.787290 2974151 kubeadm.go:319] 
	I1217 10:54:24.787387 2974151 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:54:24.787416 2974151 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:54:24.787445 2974151 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:54:24.787448 2974151 kubeadm.go:319] 
	I1217 10:54:24.791515 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:54:24.791934 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:54:24.792041 2974151 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:54:24.792274 2974151 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 10:54:24.792279 2974151 kubeadm.go:319] 
	I1217 10:54:24.792347 2974151 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 10:54:24.792486 2974151 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000331472s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
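Both SystemVerification warnings above point at the same root cause that the kubelet journal confirms later in this log: kubelet v1.35 refuses to run on a cgroup v1 host unless explicitly opted in. A minimal sketch of that opt-in, assuming kubeadm's patches mechanism (which the log shows firing against target "kubeletconfiguration") and the option name quoted in the warning; the directory and delivery flag are illustrative, not taken from this run:

    # Sketch only: a strategic-merge patch re-enabling cgroup v1 for the kubelet.
    mkdir -p /tmp/kubeadm-patches
    cat <<'EOF' >/tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF
    # Hypothetical delivery: kubeadm init --patches /tmp/kubeadm-patches ...

Per the warning text, the SystemVerification check must also be skipped explicitly, so the configuration change alone may not clear preflight.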
	
	I1217 10:54:24.792573 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 10:54:25.209097 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 10:54:25.222902 2974151 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:54:25.222960 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:54:25.231173 2974151 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:54:25.231182 2974151 kubeadm.go:158] found existing configuration files:
	
	I1217 10:54:25.231234 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:54:25.239239 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:54:25.239293 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:54:25.246851 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:54:25.254681 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:54:25.254734 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:54:25.262252 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:54:25.270359 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:54:25.270417 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:54:25.277936 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:54:25.286063 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:54:25.286121 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
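The four grep-then-rm pairs above implement one rule: keep a kubeconfig only if it references the expected control-plane endpoint, otherwise delete it before retrying kubeadm init. A compact shell equivalent, with the endpoint and file names taken from the log:

    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done

Here every grep exits with status 2 because the files no longer exist after kubeadm reset, so all four paths are removed unconditionally.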
	I1217 10:54:25.293834 2974151 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:54:25.333226 2974151 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:54:25.333620 2974151 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:54:25.403386 2974151 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:54:25.403450 2974151 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:54:25.403488 2974151 kubeadm.go:319] OS: Linux
	I1217 10:54:25.403533 2974151 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:54:25.403579 2974151 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:54:25.403625 2974151 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:54:25.403672 2974151 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:54:25.403719 2974151 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:54:25.403765 2974151 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:54:25.403809 2974151 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:54:25.403855 2974151 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:54:25.403900 2974151 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:54:25.478252 2974151 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:54:25.478355 2974151 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:54:25.478445 2974151 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:54:25.483628 2974151 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:54:25.487136 2974151 out.go:252]   - Generating certificates and keys ...
	I1217 10:54:25.487234 2974151 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:54:25.487310 2974151 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:54:25.487433 2974151 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 10:54:25.487529 2974151 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 10:54:25.487605 2974151 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 10:54:25.487662 2974151 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 10:54:25.487729 2974151 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 10:54:25.487795 2974151 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 10:54:25.487917 2974151 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 10:54:25.487994 2974151 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 10:54:25.488380 2974151 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 10:54:25.488481 2974151 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:54:26.117291 2974151 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:54:26.756756 2974151 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:54:27.066378 2974151 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:54:27.235545 2974151 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:54:27.468773 2974151 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:54:27.469453 2974151 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:54:27.472021 2974151 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:54:27.475042 2974151 out.go:252]   - Booting up control plane ...
	I1217 10:54:27.475141 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:54:27.475225 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:54:27.475306 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:54:27.497360 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:54:27.497461 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:54:27.505167 2974151 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:54:27.506337 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:54:27.506384 2974151 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:54:27.645391 2974151 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:54:27.645508 2974151 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:58:27.644872 2974151 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000353032s
	I1217 10:58:27.644897 2974151 kubeadm.go:319] 
	I1217 10:58:27.644952 2974151 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:58:27.644984 2974151 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:58:27.645087 2974151 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:58:27.645092 2974151 kubeadm.go:319] 
	I1217 10:58:27.645195 2974151 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:58:27.645226 2974151 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:58:27.645255 2974151 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:58:27.645258 2974151 kubeadm.go:319] 
	I1217 10:58:27.649050 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:58:27.649524 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:58:27.649634 2974151 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:58:27.649875 2974151 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 10:58:27.649881 2974151 kubeadm.go:319] 
	I1217 10:58:27.649949 2974151 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 10:58:27.650003 2974151 kubeadm.go:403] duration metric: took 12m6.936466746s to StartCluster
	I1217 10:58:27.650034 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:58:27.650094 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:58:27.678841 2974151 cri.go:89] found id: ""
	I1217 10:58:27.678855 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.678862 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:58:27.678868 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:58:27.678928 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:58:27.704494 2974151 cri.go:89] found id: ""
	I1217 10:58:27.704507 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.704514 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:58:27.704520 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:58:27.704578 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:58:27.729757 2974151 cri.go:89] found id: ""
	I1217 10:58:27.729770 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.729777 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:58:27.729783 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:58:27.729840 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:58:27.757253 2974151 cri.go:89] found id: ""
	I1217 10:58:27.757267 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.757274 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:58:27.757284 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:58:27.757343 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:58:27.781735 2974151 cri.go:89] found id: ""
	I1217 10:58:27.781749 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.781756 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:58:27.781760 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:58:27.781817 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:58:27.806628 2974151 cri.go:89] found id: ""
	I1217 10:58:27.806642 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.806649 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:58:27.806655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:58:27.806713 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:58:27.831983 2974151 cri.go:89] found id: ""
	I1217 10:58:27.831997 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.832004 2974151 logs.go:284] No container was found matching "kindnet"
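The seven per-component scans above can be reproduced in one loop (component names copied from the log); zero IDs for every name is what confirms the control plane never created a single container:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      printf '%-24s %s\n' "$name" "$(sudo crictl ps -a --quiet --name="$name" | wc -l)"
    done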
	I1217 10:58:27.832013 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:58:27.832023 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:58:27.889768 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:58:27.889788 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:58:27.906789 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:58:27.906806 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:58:27.971294 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:58:27.963241   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.963807   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965347   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965829   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.967335   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:58:27.963241   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.963807   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965347   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965829   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.967335   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:58:27.971304 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:58:27.971317 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:58:28.034286 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:58:28.034308 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
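For manual triage, the log-gathering pass above amounts to four commands, copied from the runner invocations, so nothing here is new:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a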
	W1217 10:58:28.076352 2974151 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 10:58:28.076384 2974151 out.go:285] * 
	W1217 10:58:28.076460 2974151 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:58:28.076478 2974151 out.go:285] * 
	W1217 10:58:28.078620 2974151 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:58:28.084354 2974151 out.go:203] 
	W1217 10:58:28.086597 2974151 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:58:28.086645 2974151 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 10:58:28.086668 2974151 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 10:58:28.089656 2974151 out.go:203] 
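Spelled out as a full command, the suggested retry would look like the sketch below. Only --extra-config comes from the suggestion; the remaining flags are inferred from values visible earlier in this log (profile, driver, runtime, version, apiserver port), so treat them as illustrative:

    minikube start -p functional-232588 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-rc.1 --apiserver-port=8441 \
      --extra-config=kubelet.cgroup-driver=systemd

Note that the kubelet journal below shows the real blocker on this host is the cgroup v1 validation itself, so changing the cgroup driver alone may not help on a cgroup v1 kernel.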
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.366987997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367000042Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367054433Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367069325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367089255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367101152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367110883Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367125414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367141668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367171452Z" level=info msg="Connect containerd service"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367467445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.368062180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389242103Z" level=info msg="Start subscribing containerd event"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389467722Z" level=info msg="Start recovering state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389473490Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.390097098Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430326171Z" level=info msg="Start event monitor"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430520850Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430594670Z" level=info msg="Start streaming server"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430655559Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430712788Z" level=info msg="runtime interface starting up..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430945234Z" level=info msg="starting plugins..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430989147Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.431326009Z" level=info msg="containerd successfully booted in 0.084806s"
	Dec 17 10:46:19 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:00:49.654111   23176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:49.654813   23176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:49.656483   23176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:49.657047   23176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:49.658513   23176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:00:49 up 16:43,  0 user,  load average: 0.93, 0.37, 0.47
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:00:46 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:46 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 505.
	Dec 17 11:00:46 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:47 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:47 functional-232588 kubelet[22997]: E1217 11:00:47.076180   22997 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:47 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:47 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:47 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 506.
	Dec 17 11:00:47 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:47 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:47 functional-232588 kubelet[23031]: E1217 11:00:47.765227   23031 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:47 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:47 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:48 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 507.
	Dec 17 11:00:48 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:48 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:48 functional-232588 kubelet[23067]: E1217 11:00:48.455483   23067 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:48 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:48 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:49 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 508.
	Dec 17 11:00:49 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:49 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:49 functional-232588 kubelet[23092]: E1217 11:00:49.309500   23092 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:49 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:49 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
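The kubelet section above is the definitive failure: systemd restarts kubelet (counters 505 through 508) and every attempt exits during configuration validation because the host runs cgroup v1. A standard Linux check, not taken from this log, confirms which cgroup version a host uses:

    # Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host.
    stat -fc %T /sys/fs/cgroup/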
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (389.340882ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
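The query that produced "Stopped" reads one field of minikube's status struct through a Go template; related fields can be read the same way. Field names here are assumed from minikube's standard status output, so verify them before relying on this:

    out/minikube-linux-arm64 status -p functional-232588 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'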
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-232588 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-232588 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (56.162794ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-232588 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-232588 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-232588 describe po hello-node-connect: exit status 1 (60.075215ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-232588 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-232588 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-232588 logs -l app=hello-node-connect: exit status 1 (76.698332ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-232588 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-232588 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-232588 describe svc hello-node-connect: exit status 1 (57.850014ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-232588 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
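
The three probes above (describe po, logs -l, describe svc) are attempted unconditionally so the dump stays as complete as the broken cluster allows; each failure is recorded and the next probe still runs. A rough sketch of that pattern (a simplification, not the suite's code), with the context and label taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		probes := [][]string{
			{"kubectl", "--context", "functional-232588", "describe", "po", "hello-node-connect"},
			{"kubectl", "--context", "functional-232588", "logs", "-l", "app=hello-node-connect"},
			{"kubectl", "--context", "functional-232588", "describe", "svc", "hello-node-connect"},
		}
		for _, p := range probes {
			// CombinedOutput keeps stderr, where "connection refused" lands.
			out, err := exec.Command(p[0], p[1:]...).CombinedOutput()
			fmt.Printf("$ %v\n%s(err=%v)\n\n", p, out, err)
		}
	}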
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
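
Note how PortBindings in the container config requests ephemeral host ports (HostPort "") while the resolved values only show up under NetworkSettings.Ports; 8441/tcp ended up on 127.0.0.1:35736 here. A small sketch of resolving such a mapping with the same inspect-template style the minikube log below uses for 22/tcp (sketch only; the port key is the one from this output):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "functional-232588").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // 35736 above
	}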
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (294.956203ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-232588 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	│ cache   │ functional-232588 cache reload                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ ssh     │ functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │ 17 Dec 25 10:46 UTC │
	│ kubectl │ functional-232588 kubectl -- --context functional-232588 get pods                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	│ start   │ -p functional-232588 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:46 UTC │                     │
	│ config  │ functional-232588 config unset cpus                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ config  │ functional-232588 config get cpus                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │                     │
	│ config  │ functional-232588 config set cpus 2                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ config  │ functional-232588 config get cpus                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ config  │ functional-232588 config unset cpus                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ ssh     │ functional-232588 ssh -n functional-232588 sudo cat /home/docker/cp-test.txt                             │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ config  │ functional-232588 config get cpus                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │                     │
	│ ssh     │ functional-232588 ssh echo hello                                                                         │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ ssh     │ functional-232588 ssh cat /etc/hostname                                                                  │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ ssh     │ functional-232588 ssh -n functional-232588 sudo cat /home/docker/cp-test.txt                             │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ tunnel  │ functional-232588 tunnel --alsologtostderr                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │                     │
	│ tunnel  │ functional-232588 tunnel --alsologtostderr                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │                     │
	│ cp      │ functional-232588 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ tunnel  │ functional-232588 tunnel --alsologtostderr                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │                     │
	│ ssh     │ functional-232588 ssh -n functional-232588 sudo cat /tmp/does/not/exist/cp-test.txt                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:58 UTC │ 17 Dec 25 10:58 UTC │
	│ addons  │ functional-232588 addons list                                                                            │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ addons  │ functional-232588 addons list -o json                                                                    │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:46:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:46:16.812860 2974151 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:46:16.812963 2974151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:46:16.813007 2974151 out.go:374] Setting ErrFile to fd 2...
	I1217 10:46:16.813012 2974151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:46:16.813266 2974151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:46:16.813634 2974151 out.go:368] Setting JSON to false
	I1217 10:46:16.814461 2974151 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":59327,"bootTime":1765909050,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:46:16.814519 2974151 start.go:143] virtualization:  
	I1217 10:46:16.818066 2974151 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:46:16.822068 2974151 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:46:16.822151 2974151 notify.go:221] Checking for updates...
	I1217 10:46:16.828253 2974151 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:46:16.831316 2974151 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:46:16.834373 2974151 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:46:16.837375 2974151 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:46:16.840310 2974151 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:46:16.843753 2974151 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:46:16.843853 2974151 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:46:16.873076 2974151 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:46:16.873190 2974151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:46:16.938275 2974151 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 10:46:16.928760564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:46:16.938365 2974151 docker.go:319] overlay module found
	I1217 10:46:16.941603 2974151 out.go:179] * Using the docker driver based on existing profile
	I1217 10:46:16.944540 2974151 start.go:309] selected driver: docker
	I1217 10:46:16.944578 2974151 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:16.944677 2974151 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:46:16.944788 2974151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:46:17.021027 2974151 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 10:46:17.010774366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:46:17.021436 2974151 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 10:46:17.021458 2974151 cni.go:84] Creating CNI manager for ""
	I1217 10:46:17.021510 2974151 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:46:17.021561 2974151 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:17.024793 2974151 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
	I1217 10:46:17.027565 2974151 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:46:17.030993 2974151 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:46:17.033790 2974151 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:46:17.033824 2974151 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 10:46:17.033833 2974151 cache.go:65] Caching tarball of preloaded images
	I1217 10:46:17.033918 2974151 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 10:46:17.033926 2974151 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 10:46:17.034031 2974151 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
	I1217 10:46:17.034251 2974151 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:46:17.058099 2974151 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 10:46:17.058112 2974151 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 10:46:17.058125 2974151 cache.go:243] Successfully downloaded all kic artifacts
	I1217 10:46:17.058155 2974151 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 10:46:17.058219 2974151 start.go:364] duration metric: took 48.59µs to acquireMachinesLock for "functional-232588"
	I1217 10:46:17.058239 2974151 start.go:96] Skipping create...Using existing machine configuration
	I1217 10:46:17.058243 2974151 fix.go:54] fixHost starting: 
	I1217 10:46:17.058504 2974151 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
	I1217 10:46:17.079212 2974151 fix.go:112] recreateIfNeeded on functional-232588: state=Running err=<nil>
	W1217 10:46:17.079241 2974151 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 10:46:17.082582 2974151 out.go:252] * Updating the running docker "functional-232588" container ...
	I1217 10:46:17.082612 2974151 machine.go:94] provisionDockerMachine start ...
	I1217 10:46:17.082696 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.100077 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.100208 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.100214 2974151 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 10:46:17.228063 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:46:17.228077 2974151 ubuntu.go:182] provisioning hostname "functional-232588"
	I1217 10:46:17.228138 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.245852 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.245963 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.245971 2974151 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
	I1217 10:46:17.390208 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
	
	I1217 10:46:17.390287 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.409213 2974151 main.go:143] libmachine: Using SSH client type: native
	I1217 10:46:17.409321 2974151 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35733 <nil> <nil>}
	I1217 10:46:17.409335 2974151 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 10:46:17.545048 2974151 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 10:46:17.545065 2974151 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 10:46:17.545093 2974151 ubuntu.go:190] setting up certificates
	I1217 10:46:17.545101 2974151 provision.go:84] configureAuth start
	I1217 10:46:17.545170 2974151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:46:17.563036 2974151 provision.go:143] copyHostCerts
	I1217 10:46:17.563100 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 10:46:17.563107 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 10:46:17.563182 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 10:46:17.563277 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 10:46:17.563281 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 10:46:17.563306 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 10:46:17.563356 2974151 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 10:46:17.563359 2974151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 10:46:17.563381 2974151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 10:46:17.563426 2974151 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
	I1217 10:46:17.716164 2974151 provision.go:177] copyRemoteCerts
	I1217 10:46:17.716219 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 10:46:17.716261 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.737388 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:17.836120 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 10:46:17.853626 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 10:46:17.870501 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 10:46:17.888326 2974151 provision.go:87] duration metric: took 343.201911ms to configureAuth
	I1217 10:46:17.888344 2974151 ubuntu.go:206] setting minikube options for container-runtime
	I1217 10:46:17.888621 2974151 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 10:46:17.888627 2974151 machine.go:97] duration metric: took 806.010876ms to provisionDockerMachine
	I1217 10:46:17.888635 2974151 start.go:293] postStartSetup for "functional-232588" (driver="docker")
	I1217 10:46:17.888646 2974151 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 10:46:17.888710 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 10:46:17.888750 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:17.905996 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.000491 2974151 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 10:46:18.012109 2974151 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 10:46:18.012146 2974151 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 10:46:18.012158 2974151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 10:46:18.012224 2974151 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 10:46:18.012302 2974151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 10:46:18.012378 2974151 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
	I1217 10:46:18.012531 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
	I1217 10:46:18.021349 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:46:18.041286 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
	I1217 10:46:18.060319 2974151 start.go:296] duration metric: took 171.669118ms for postStartSetup
	I1217 10:46:18.060436 2974151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 10:46:18.060478 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.080470 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.173527 2974151 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 10:46:18.178353 2974151 fix.go:56] duration metric: took 1.120102504s for fixHost
	I1217 10:46:18.178370 2974151 start.go:83] releasing machines lock for "functional-232588", held for 1.120143316s
	I1217 10:46:18.178439 2974151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
	I1217 10:46:18.195096 2974151 ssh_runner.go:195] Run: cat /version.json
	I1217 10:46:18.195136 2974151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 10:46:18.195139 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.195194 2974151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
	I1217 10:46:18.218089 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.224561 2974151 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
	I1217 10:46:18.312237 2974151 ssh_runner.go:195] Run: systemctl --version
	I1217 10:46:18.401982 2974151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 10:46:18.406442 2974151 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 10:46:18.406503 2974151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 10:46:18.414452 2974151 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 10:46:18.414475 2974151 start.go:496] detecting cgroup driver to use...
	I1217 10:46:18.414504 2974151 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 10:46:18.414555 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 10:46:18.437080 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 10:46:18.453263 2974151 docker.go:218] disabling cri-docker service (if available) ...
	I1217 10:46:18.453314 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 10:46:18.469891 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 10:46:18.484540 2974151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 10:46:18.608866 2974151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 10:46:18.727258 2974151 docker.go:234] disabling docker service ...
	I1217 10:46:18.727333 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 10:46:18.742532 2974151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 10:46:18.755933 2974151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 10:46:18.876736 2974151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 10:46:18.997189 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 10:46:19.012062 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 10:46:19.033558 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 10:46:19.046193 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 10:46:19.056269 2974151 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 10:46:19.056333 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 10:46:19.066650 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:46:19.076242 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 10:46:19.086026 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 10:46:19.095009 2974151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 10:46:19.103467 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 10:46:19.112970 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 10:46:19.121805 2974151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 10:46:19.131086 2974151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 10:46:19.139081 2974151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 10:46:19.146487 2974151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:46:19.293215 2974151 ssh_runner.go:195] Run: sudo systemctl restart containerd
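
Because the host reported the cgroupfs driver, the sed invocations above force SystemdCgroup = false in /etc/containerd/config.toml before restarting containerd, so kubelet (configured with cgroupDriver: cgroupfs further below) and the runtime agree. A sketch of that rewrite applied to an in-memory stand-in line (assumption: the real file nests this key under containerd's runc options table):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "    SystemdCgroup = true\n" // stand-in for the line in config.toml
		// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}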
	I1217 10:46:19.434655 2974151 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 10:46:19.434715 2974151 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 10:46:19.439246 2974151 start.go:564] Will wait 60s for crictl version
	I1217 10:46:19.439314 2974151 ssh_runner.go:195] Run: which crictl
	I1217 10:46:19.442915 2974151 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 10:46:19.467445 2974151 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 10:46:19.467506 2974151 ssh_runner.go:195] Run: containerd --version
	I1217 10:46:19.489544 2974151 ssh_runner.go:195] Run: containerd --version
	I1217 10:46:19.516185 2974151 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 10:46:19.519114 2974151 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 10:46:19.535732 2974151 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 10:46:19.542843 2974151 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 10:46:19.545647 2974151 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 10:46:19.545821 2974151 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 10:46:19.545902 2974151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:46:19.570156 2974151 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:46:19.570167 2974151 containerd.go:534] Images already preloaded, skipping extraction
	I1217 10:46:19.570223 2974151 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 10:46:19.598013 2974151 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 10:46:19.598025 2974151 cache_images.go:86] Images are preloaded, skipping loading
	I1217 10:46:19.598031 2974151 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1217 10:46:19.598133 2974151 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 10:46:19.598195 2974151 ssh_runner.go:195] Run: sudo crictl info
	I1217 10:46:19.628150 2974151 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 10:46:19.628169 2974151 cni.go:84] Creating CNI manager for ""
	I1217 10:46:19.628176 2974151 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:46:19.628184 2974151 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 10:46:19.628205 2974151 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 10:46:19.628313 2974151 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
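
Note: the four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) make up the single kubeadm config file written to /var/tmp/minikube/kubeadm.yaml.new below (the 2085-byte scp). To read the rendered file straight off the node, something like:

	minikube -p functional-232588 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new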
	
	I1217 10:46:19.628380 2974151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 10:46:19.636242 2974151 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 10:46:19.636301 2974151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 10:46:19.643919 2974151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 10:46:19.658022 2974151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 10:46:19.670961 2974151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1217 10:46:19.684065 2974151 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 10:46:19.687947 2974151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 10:46:19.796384 2974151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 10:46:20.002745 2974151 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
	I1217 10:46:20.002759 2974151 certs.go:195] generating shared ca certs ...
	I1217 10:46:20.002799 2974151 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:46:20.002998 2974151 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 10:46:20.003055 2974151 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 10:46:20.003062 2974151 certs.go:257] generating profile certs ...
	I1217 10:46:20.003183 2974151 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
	I1217 10:46:20.003236 2974151 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
	I1217 10:46:20.003288 2974151 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
	I1217 10:46:20.003444 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 10:46:20.003480 2974151 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 10:46:20.003508 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 10:46:20.003545 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 10:46:20.003577 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 10:46:20.003610 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 10:46:20.003665 2974151 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 10:46:20.004449 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 10:46:20.040127 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 10:46:20.065442 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 10:46:20.086611 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 10:46:20.107054 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 10:46:20.126007 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 10:46:20.144078 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 10:46:20.162802 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 10:46:20.181368 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 10:46:20.200073 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 10:46:20.217945 2974151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 10:46:20.235640 2974151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
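Note: at this point every profile certificate has been pushed to /var/lib/minikube/certs. A quick sanity check that the apiserver cert carries the SANs configured above (127.0.0.1, localhost, 192.168.49.2); standard openssl, run here via minikube ssh:

	# look for the X509v3 Subject Alternative Name block in the output
	minikube -p functional-232588 ssh -- sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt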
	I1217 10:46:20.248545 2974151 ssh_runner.go:195] Run: openssl version
	I1217 10:46:20.256076 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.263759 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 10:46:20.271126 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.274974 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.275038 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 10:46:20.316429 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 10:46:20.323945 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.331201 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 10:46:20.339536 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.343551 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.343606 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 10:46:20.384485 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 10:46:20.391694 2974151 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.399044 2974151 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 10:46:20.406332 2974151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.410078 2974151 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.410134 2974151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 10:46:20.451203 2974151 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
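Note: each openssl x509 -hash call above prints the subject-name hash OpenSSL uses to look up a CA at verification time, and the /etc/ssl/certs/<hash>.0 symlink tested right after it is what makes the CA discoverable. For example, the b5213941 checked on the last line is the hash of minikubeCA.pem:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0   # expected to resolve to minikubeCA.pem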
	I1217 10:46:20.458641 2974151 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 10:46:20.462247 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 10:46:20.503114 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 10:46:20.544335 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 10:46:20.590045 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 10:46:20.630985 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 10:46:20.672580 2974151 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
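Note: the six -checkend 86400 runs above ask openssl whether each certificate will still be valid 86400 seconds (24 h) from now; exit 0 means yes, exit 1 flags a cert for regeneration. As a standalone sketch for one of them:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "still valid 24h from now"
	else
	  echo "expires within 24h"   # would flag the cert for regeneration
	fi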
	I1217 10:46:20.713547 2974151 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:46:20.713638 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 10:46:20.713707 2974151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:46:20.740007 2974151 cri.go:89] found id: ""
	I1217 10:46:20.740065 2974151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 10:46:20.747914 2974151 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 10:46:20.747924 2974151 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 10:46:20.747974 2974151 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 10:46:20.757908 2974151 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.758430 2974151 kubeconfig.go:125] found "functional-232588" server: "https://192.168.49.2:8441"
	I1217 10:46:20.761036 2974151 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 10:46:20.769414 2974151 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 10:31:46.081162571 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 10:46:19.676908670 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
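Note: the drift above is the test's admission-plugin override taking effect: the stock plugin list in the old kubeadm.yaml is replaced by NamespaceAutoProvision, matching the ExtraOptions entry logged earlier. A hedged reconstruction of the flag that produces it (the exact test invocation may differ):

	minikube start -p functional-232588 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision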
	I1217 10:46:20.769441 2974151 kubeadm.go:1161] stopping kube-system containers ...
	I1217 10:46:20.769455 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 10:46:20.769528 2974151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 10:46:20.801226 2974151 cri.go:89] found id: ""
	I1217 10:46:20.801308 2974151 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 10:46:20.820664 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:46:20.829373 2974151 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 17 10:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 10:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 17 10:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 10:35 /etc/kubernetes/scheduler.conf
	
	I1217 10:46:20.829433 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:46:20.837325 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:46:20.845308 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.845363 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:46:20.853199 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:46:20.860841 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.860897 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:46:20.868346 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:46:20.876151 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 10:46:20.876211 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
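Note: the three grep/rm pairs above apply one rule: any component kubeconfig that does not reference https://control-plane.minikube.internal:8441 is deleted so the kubeadm kubeconfig phase below regenerates it (admin.conf matched the grep and was kept). The same logic as a plain shell sketch:

	for f in kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8441' /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done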
	I1217 10:46:20.883945 2974151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 10:46:20.892018 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:20.938748 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.162130 2974151 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.22335875s)
	I1217 10:46:22.162221 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.359829 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 10:46:22.415930 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
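Note: instead of a full kubeadm init, the restart path replays individual phases: certs, kubeconfig, kubelet-start, control-plane, and etcd. The control-plane and etcd phases only write static pod manifests; actually starting them is the kubelet's job, which is what the apiserver wait below is counting on. A hedged check that the manifests landed:

	minikube -p functional-232588 ssh -- sudo ls -la /etc/kubernetes/manifests
	# expected: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml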
	I1217 10:46:22.468185 2974151 api_server.go:52] waiting for apiserver process to appear ...
	I1217 10:46:22.468265 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:22.969146 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:23.468479 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:23.968514 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:24.468479 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:24.969355 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:25.469200 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:25.969018 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:26.468818 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:26.969109 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:27.468378 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:27.969311 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:28.469065 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:28.969101 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:29.468403 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:29.968443 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:30.468499 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:30.968729 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:31.468355 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:31.968496 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:32.468560 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:32.968509 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:33.469088 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:33.969160 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:34.468498 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:34.968497 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:35.468823 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:35.968410 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:36.469195 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:36.969040 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:37.469267 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:37.969122 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:38.469239 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:38.969263 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:39.469144 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:39.969429 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:40.468520 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:40.968559 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:41.469268 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:41.968407 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:42.469044 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:42.969148 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:43.468399 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:43.968478 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:44.468402 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:44.969211 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:45.469415 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:45.968355 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:46.468347 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:46.969243 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:47.468650 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:47.969320 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:48.469355 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:48.969346 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:49.469299 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:49.968561 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:50.469414 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:50.968570 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:51.468468 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:51.969383 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:52.468402 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:52.969191 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:53.469310 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:53.969186 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:54.469057 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:54.968491 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:55.469204 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:55.968499 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:56.468579 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:56.968537 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:57.468523 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:57.968481 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:58.468521 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:58.969320 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:59.469211 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:46:59.968498 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:00.468441 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:00.969123 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:01.468956 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:01.969376 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:02.468446 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:02.969237 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:03.468449 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:03.969079 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:04.469054 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:04.968610 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:05.468502 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:05.968334 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:06.469020 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:06.969077 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:07.469052 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:07.968481 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:08.469171 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:08.968586 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:09.469235 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:09.968478 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:10.469198 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:10.968403 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:11.469192 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:11.969439 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:12.469344 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:12.969231 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:13.469196 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:13.969169 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:14.469322 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:14.969138 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:15.469310 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:15.969247 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:16.469080 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:16.968869 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:17.468522 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:17.968551 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:18.468369 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:18.969356 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:19.469354 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:19.969205 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:20.469085 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:20.968997 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:21.468670 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:21.969358 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
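Note: the loop above probes for a kube-apiserver process roughly every 500 ms; between 10:46:22 and 10:47:22 it never matches, so after about a minute minikube abandons the process check and falls back to inspecting containers and logs. The probe and the first fallbacks, runnable by hand on the node:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # silent here: the process never started
	sudo crictl ps -a --quiet --name=kube-apiserver   # empty below: no container either
	sudo journalctl -u kubelet -n 400                 # why the static pod was never launched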
	I1217 10:47:22.469259 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:22.469337 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:22.493873 2974151 cri.go:89] found id: ""
	I1217 10:47:22.493887 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.493894 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:22.493901 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:22.493960 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:22.522462 2974151 cri.go:89] found id: ""
	I1217 10:47:22.522476 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.522483 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:22.522488 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:22.522547 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:22.550878 2974151 cri.go:89] found id: ""
	I1217 10:47:22.550892 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.550899 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:22.550904 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:22.550964 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:22.576167 2974151 cri.go:89] found id: ""
	I1217 10:47:22.576181 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.576188 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:22.576193 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:22.576253 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:22.600591 2974151 cri.go:89] found id: ""
	I1217 10:47:22.600605 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.600612 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:22.600617 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:22.600673 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:22.624978 2974151 cri.go:89] found id: ""
	I1217 10:47:22.624992 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.624999 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:22.625005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:22.625062 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:22.649387 2974151 cri.go:89] found id: ""
	I1217 10:47:22.649401 2974151 logs.go:282] 0 containers: []
	W1217 10:47:22.649408 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:22.649415 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:22.649427 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:22.666544 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:22.666563 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:22.733635 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:22.724930   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.725595   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727257   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727857   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.729508   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:22.724930   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.725595   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727257   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.727857   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:22.729508   10771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:22.733647 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:22.733658 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:22.802118 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:22.802139 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:22.842645 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:22.842661 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
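Note: each diagnostic cycle from here on gathers dmesg, kubectl describe nodes, the containerd journal, container status, and the kubelet journal; describe nodes fails with connection refused because nothing is listening on 8441. From the host, the same bundle can be captured in one shot (output filename is arbitrary):

	minikube -p functional-232588 logs --file=./minikube-logs.txt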
	I1217 10:47:25.403296 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:25.413370 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:25.413431 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:25.437778 2974151 cri.go:89] found id: ""
	I1217 10:47:25.437792 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.437799 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:25.437804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:25.437864 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:25.466932 2974151 cri.go:89] found id: ""
	I1217 10:47:25.466946 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.466953 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:25.466959 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:25.467017 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:25.495887 2974151 cri.go:89] found id: ""
	I1217 10:47:25.495901 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.495907 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:25.495912 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:25.495971 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:25.521061 2974151 cri.go:89] found id: ""
	I1217 10:47:25.521075 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.521082 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:25.521087 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:25.521146 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:25.550884 2974151 cri.go:89] found id: ""
	I1217 10:47:25.550898 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.550905 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:25.550910 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:25.550967 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:25.576130 2974151 cri.go:89] found id: ""
	I1217 10:47:25.576145 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.576151 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:25.576156 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:25.576224 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:25.600903 2974151 cri.go:89] found id: ""
	I1217 10:47:25.600916 2974151 logs.go:282] 0 containers: []
	W1217 10:47:25.600923 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:25.600931 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:25.600941 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:25.633359 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:25.633375 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:25.689492 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:25.689512 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:25.706643 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:25.706661 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:25.788195 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:25.780730   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.781147   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782587   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782886   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.784365   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:25.780730   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.781147   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782587   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.782886   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:25.784365   10888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:25.788207 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:25.788218 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:28.357987 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:28.368310 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:28.368371 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:28.393766 2974151 cri.go:89] found id: ""
	I1217 10:47:28.393789 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.393797 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:28.393803 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:28.393876 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:28.418225 2974151 cri.go:89] found id: ""
	I1217 10:47:28.418240 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.418247 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:28.418253 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:28.418312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:28.444064 2974151 cri.go:89] found id: ""
	I1217 10:47:28.444083 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.444091 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:28.444096 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:28.444157 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:28.469125 2974151 cri.go:89] found id: ""
	I1217 10:47:28.469139 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.469146 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:28.469152 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:28.469210 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:28.494598 2974151 cri.go:89] found id: ""
	I1217 10:47:28.494614 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.494621 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:28.494627 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:28.494689 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:28.529767 2974151 cri.go:89] found id: ""
	I1217 10:47:28.529781 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.529788 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:28.529793 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:28.529851 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:28.554626 2974151 cri.go:89] found id: ""
	I1217 10:47:28.554640 2974151 logs.go:282] 0 containers: []
	W1217 10:47:28.554653 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:28.554661 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:28.554671 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:28.610665 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:28.610693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:28.627829 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:28.627846 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:28.694227 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:28.685909   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.686688   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688310   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688904   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.690427   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:47:28.685909   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.686688   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688310   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.688904   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:28.690427   10982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:47:28.694247 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:28.694257 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:28.761980 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:28.761999 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:31.299127 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:31.309358 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:31.309418 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:31.334436 2974151 cri.go:89] found id: ""
	I1217 10:47:31.334450 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.334458 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:31.334463 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:31.334530 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:31.359180 2974151 cri.go:89] found id: ""
	I1217 10:47:31.359195 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.359202 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:31.359207 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:31.359264 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:31.386298 2974151 cri.go:89] found id: ""
	I1217 10:47:31.386312 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.386319 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:31.386324 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:31.386385 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:31.414747 2974151 cri.go:89] found id: ""
	I1217 10:47:31.414762 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.414769 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:31.414774 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:31.414835 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:31.439979 2974151 cri.go:89] found id: ""
	I1217 10:47:31.439993 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.439999 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:31.440005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:31.440061 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:31.465613 2974151 cri.go:89] found id: ""
	I1217 10:47:31.465628 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.465635 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:31.465641 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:31.465698 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:31.495303 2974151 cri.go:89] found id: ""
	I1217 10:47:31.495317 2974151 logs.go:282] 0 containers: []
	W1217 10:47:31.495324 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:31.495332 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:31.495347 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:31.551359 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:31.551380 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:31.568339 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:31.568356 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:31.631156 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:31.622217   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.623260   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.624240   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.625368   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:31.626068   11088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:31.631168 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:31.631179 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:31.694344 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:31.694364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
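
Each retry above follows the same pattern: probe for a kube-apiserver process with pgrep, then ask crictl whether a container exists for each control-plane component. A minimal sketch of that per-component check, reusing the crictl invocation from the Run: lines above (the loop wrapper is illustrative, not minikube's own code):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # crictl ps -a includes exited containers, so an empty result means
	  # the component was never created, not that it crashed mid-run
	  if [ -z "$(sudo crictl ps -a --quiet --name="$name")" ]; then
	    echo "no container found matching \"$name\""
	  fi
	done

Every component coming back empty on every iteration, as here, means the control plane never started at all.
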
	I1217 10:47:34.224306 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:34.234549 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:34.234609 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:34.262893 2974151 cri.go:89] found id: ""
	I1217 10:47:34.262907 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.262913 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:34.262919 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:34.262974 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:34.287865 2974151 cri.go:89] found id: ""
	I1217 10:47:34.287880 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.287887 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:34.287892 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:34.287971 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:34.314130 2974151 cri.go:89] found id: ""
	I1217 10:47:34.314144 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.314151 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:34.314157 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:34.314213 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:34.338080 2974151 cri.go:89] found id: ""
	I1217 10:47:34.338094 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.338101 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:34.338106 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:34.338167 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:34.366907 2974151 cri.go:89] found id: ""
	I1217 10:47:34.366922 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.366929 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:34.366934 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:34.367005 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:34.394628 2974151 cri.go:89] found id: ""
	I1217 10:47:34.394642 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.394650 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:34.394655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:34.394718 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:34.422575 2974151 cri.go:89] found id: ""
	I1217 10:47:34.422590 2974151 logs.go:282] 0 containers: []
	W1217 10:47:34.422597 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:34.422605 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:34.422615 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:34.478427 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:34.478445 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:34.495399 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:34.495416 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:34.567591 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:34.559443   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.560218   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.561959   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.562370   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:34.563927   11193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:34.567600 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:34.567611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:34.629987 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:34.630008 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:37.172568 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:37.185167 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:37.185227 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:37.209648 2974151 cri.go:89] found id: ""
	I1217 10:47:37.209662 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.209669 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:37.209674 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:37.209734 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:37.239202 2974151 cri.go:89] found id: ""
	I1217 10:47:37.239216 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.239223 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:37.239229 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:37.239287 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:37.264777 2974151 cri.go:89] found id: ""
	I1217 10:47:37.264791 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.264798 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:37.264803 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:37.264870 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:37.290195 2974151 cri.go:89] found id: ""
	I1217 10:47:37.290209 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.290216 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:37.290221 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:37.290277 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:37.315019 2974151 cri.go:89] found id: ""
	I1217 10:47:37.315033 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.315040 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:37.315046 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:37.315116 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:37.339319 2974151 cri.go:89] found id: ""
	I1217 10:47:37.339333 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.339340 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:37.339345 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:37.339407 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:37.365996 2974151 cri.go:89] found id: ""
	I1217 10:47:37.366010 2974151 logs.go:282] 0 containers: []
	W1217 10:47:37.366017 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:37.366024 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:37.366034 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:37.382805 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:37.382824 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:37.447944 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:37.439827   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.440553   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442220   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.442682   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:37.444195   11294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:37.447955 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:37.447966 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:37.510276 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:37.510298 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:37.540200 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:37.540215 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:40.105556 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:40.119775 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:40.119860 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:40.144817 2974151 cri.go:89] found id: ""
	I1217 10:47:40.144832 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.144839 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:40.144844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:40.144908 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:40.169663 2974151 cri.go:89] found id: ""
	I1217 10:47:40.169676 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.169683 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:40.169688 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:40.169745 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:40.194821 2974151 cri.go:89] found id: ""
	I1217 10:47:40.194835 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.194842 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:40.194847 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:40.194909 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:40.222839 2974151 cri.go:89] found id: ""
	I1217 10:47:40.222853 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.222860 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:40.222866 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:40.222940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:40.247991 2974151 cri.go:89] found id: ""
	I1217 10:47:40.248005 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.248012 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:40.248017 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:40.248075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:40.272758 2974151 cri.go:89] found id: ""
	I1217 10:47:40.272772 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.272778 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:40.272783 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:40.272844 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:40.298276 2974151 cri.go:89] found id: ""
	I1217 10:47:40.298290 2974151 logs.go:282] 0 containers: []
	W1217 10:47:40.298297 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:40.298305 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:40.298316 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:40.314934 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:40.314950 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:40.379519 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:40.371790   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.372215   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.373688   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.374125   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:40.375622   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:40.379532 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:40.379544 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:40.442308 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:40.442328 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:40.471269 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:40.471287 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:43.030145 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:43.043645 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:43.043715 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:43.081236 2974151 cri.go:89] found id: ""
	I1217 10:47:43.081250 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.081257 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:43.081262 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:43.081326 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:43.115370 2974151 cri.go:89] found id: ""
	I1217 10:47:43.115384 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.115390 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:43.115399 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:43.115462 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:43.140373 2974151 cri.go:89] found id: ""
	I1217 10:47:43.140387 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.140395 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:43.140400 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:43.140480 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:43.166855 2974151 cri.go:89] found id: ""
	I1217 10:47:43.166870 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.166877 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:43.166883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:43.166941 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:43.191839 2974151 cri.go:89] found id: ""
	I1217 10:47:43.191854 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.191861 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:43.191866 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:43.191927 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:43.217632 2974151 cri.go:89] found id: ""
	I1217 10:47:43.217652 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.217659 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:43.217664 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:43.217725 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:43.242042 2974151 cri.go:89] found id: ""
	I1217 10:47:43.242056 2974151 logs.go:282] 0 containers: []
	W1217 10:47:43.242064 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:43.242071 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:43.242081 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:43.299602 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:43.299621 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:43.316995 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:43.317012 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:43.381195 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:43.373241   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.374026   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375639   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.375964   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:43.377408   11512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:43.381206 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:43.381217 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:43.443981 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:43.444003 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
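
The recurring "connection refused" on [::1]:8441 indicates nothing is listening on the apiserver port inside the node, which is why every describe-nodes attempt fails identically. A hypothetical spot check (assumes shell access to the node, e.g. via minikube ssh, and that ss is available in the image):

	# confirm no process is bound to the apiserver port
	sudo ss -tlnp | grep 8441 || echo "no listener on :8441"
	# the endpoint the logged kubectl probe is dialing
	sudo grep 'server:' /var/lib/minikube/kubeconfig
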
	I1217 10:47:45.975295 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:45.985580 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:45.985639 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:46.020412 2974151 cri.go:89] found id: ""
	I1217 10:47:46.020446 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.020454 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:46.020460 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:46.020529 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:46.056724 2974151 cri.go:89] found id: ""
	I1217 10:47:46.056739 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.056755 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:46.056762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:46.056823 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:46.087796 2974151 cri.go:89] found id: ""
	I1217 10:47:46.087811 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.087818 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:46.087844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:46.087924 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:46.112453 2974151 cri.go:89] found id: ""
	I1217 10:47:46.112467 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.112475 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:46.112480 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:46.112539 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:46.141019 2974151 cri.go:89] found id: ""
	I1217 10:47:46.141034 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.141041 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:46.141047 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:46.141103 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:46.165608 2974151 cri.go:89] found id: ""
	I1217 10:47:46.165621 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.165628 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:46.165634 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:46.165691 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:46.192283 2974151 cri.go:89] found id: ""
	I1217 10:47:46.192307 2974151 logs.go:282] 0 containers: []
	W1217 10:47:46.192315 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:46.192323 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:46.192335 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:46.255412 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:46.255435 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:46.287390 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:46.287406 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:46.344424 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:46.344442 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:46.361344 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:46.361361 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:46.424398 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:46.416182   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.416923   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418495   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.418798   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:46.420304   11630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:48.924647 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:48.934813 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:48.934877 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:48.959135 2974151 cri.go:89] found id: ""
	I1217 10:47:48.959159 2974151 logs.go:282] 0 containers: []
	W1217 10:47:48.959166 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:48.959172 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:48.959241 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:48.983610 2974151 cri.go:89] found id: ""
	I1217 10:47:48.983632 2974151 logs.go:282] 0 containers: []
	W1217 10:47:48.983640 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:48.983645 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:48.983714 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:49.026685 2974151 cri.go:89] found id: ""
	I1217 10:47:49.026700 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.026707 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:49.026713 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:49.026773 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:49.060861 2974151 cri.go:89] found id: ""
	I1217 10:47:49.060876 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.060883 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:49.060890 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:49.060950 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:49.090198 2974151 cri.go:89] found id: ""
	I1217 10:47:49.090213 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.090221 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:49.090226 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:49.090288 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:49.119661 2974151 cri.go:89] found id: ""
	I1217 10:47:49.119676 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.119683 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:49.119689 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:49.119812 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:49.148486 2974151 cri.go:89] found id: ""
	I1217 10:47:49.148500 2974151 logs.go:282] 0 containers: []
	W1217 10:47:49.148507 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:49.148515 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:49.148525 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:49.212250 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:49.212271 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:49.240975 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:49.240993 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:49.299733 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:49.299756 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:49.316863 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:49.316882 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:49.387132 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:49.378625   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.379410   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381103   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.381692   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:49.383302   11733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:51.888132 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:51.898751 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:51.898816 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:51.932795 2974151 cri.go:89] found id: ""
	I1217 10:47:51.932815 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.932827 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:51.932833 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:51.932896 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:51.963357 2974151 cri.go:89] found id: ""
	I1217 10:47:51.963371 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.963378 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:51.963384 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:51.963448 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:51.988757 2974151 cri.go:89] found id: ""
	I1217 10:47:51.988778 2974151 logs.go:282] 0 containers: []
	W1217 10:47:51.988785 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:51.988790 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:51.988850 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:52.028153 2974151 cri.go:89] found id: ""
	I1217 10:47:52.028167 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.028174 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:52.028180 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:52.028244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:52.063954 2974151 cri.go:89] found id: ""
	I1217 10:47:52.063968 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.063975 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:52.063980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:52.064038 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:52.098500 2974151 cri.go:89] found id: ""
	I1217 10:47:52.098514 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.098521 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:52.098527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:52.098587 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:52.130345 2974151 cri.go:89] found id: ""
	I1217 10:47:52.130359 2974151 logs.go:282] 0 containers: []
	W1217 10:47:52.130366 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:52.130374 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:52.130384 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:52.189106 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:52.189126 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:52.207475 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:52.207493 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:52.271884 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:52.263636   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.264410   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.265990   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.266498   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:52.267978   11823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:52.271903 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:52.271914 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:52.334484 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:52.334504 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
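
For reference, the complete log bundle gathered on each iteration, assembled verbatim from the Run: lines above into one sequence (paths and the kubectl version are as logged; re-running it by hand assumes the same node layout):

	# kubelet and containerd service logs, last 400 lines each
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	# kernel warnings and errors
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# node description via the bundled kubectl (fails here: apiserver is down)
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# container status, falling back to docker if crictl is absent
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
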
	I1217 10:47:54.867624 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:54.877729 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:54.877789 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:54.902223 2974151 cri.go:89] found id: ""
	I1217 10:47:54.902237 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.902244 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:54.902250 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:54.902312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:54.927795 2974151 cri.go:89] found id: ""
	I1217 10:47:54.927810 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.927817 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:54.927823 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:54.927888 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:54.954800 2974151 cri.go:89] found id: ""
	I1217 10:47:54.954816 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.954823 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:54.954829 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:54.954888 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:54.980005 2974151 cri.go:89] found id: ""
	I1217 10:47:54.980018 2974151 logs.go:282] 0 containers: []
	W1217 10:47:54.980025 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:54.980030 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:54.980093 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:55.013092 2974151 cri.go:89] found id: ""
	I1217 10:47:55.013107 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.013115 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:55.013121 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:55.013191 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:55.050531 2974151 cri.go:89] found id: ""
	I1217 10:47:55.050545 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.050552 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:55.050557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:55.050619 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:55.090230 2974151 cri.go:89] found id: ""
	I1217 10:47:55.090245 2974151 logs.go:282] 0 containers: []
	W1217 10:47:55.090252 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:55.090260 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:55.090270 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:55.153444 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:55.153464 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:47:55.185504 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:55.185520 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:55.242466 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:55.242485 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:55.260631 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:55.260648 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:55.331030 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:55.322930   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.323475   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.324828   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.325446   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:55.327095   11940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
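The "connection refused" lines above mean nothing is listening on localhost:8441 at all; this is not a TLS or credentials failure. A quick way to confirm from the node, assuming curl is available (/livez is the standard apiserver health endpoint):

    # -k skips certificate verification; exit code 7 means connect failure
    curl -sk https://localhost:8441/livez; echo "exit=$?"

While the apiserver is down this exits 7; once a healthy apiserver binds the port it prints "ok".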
	I1217 10:47:57.831262 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:47:57.841170 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:47:57.841234 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:47:57.869513 2974151 cri.go:89] found id: ""
	I1217 10:47:57.869529 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.869536 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:47:57.869542 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:47:57.869602 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:47:57.898410 2974151 cri.go:89] found id: ""
	I1217 10:47:57.898424 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.898431 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:47:57.898437 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:47:57.898497 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:47:57.926916 2974151 cri.go:89] found id: ""
	I1217 10:47:57.926931 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.926938 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:47:57.926944 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:47:57.927008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:47:57.956754 2974151 cri.go:89] found id: ""
	I1217 10:47:57.956768 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.956775 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:47:57.956780 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:47:57.956840 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:47:57.981614 2974151 cri.go:89] found id: ""
	I1217 10:47:57.981629 2974151 logs.go:282] 0 containers: []
	W1217 10:47:57.981636 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:47:57.981642 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:47:57.981701 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:47:58.021825 2974151 cri.go:89] found id: ""
	I1217 10:47:58.021839 2974151 logs.go:282] 0 containers: []
	W1217 10:47:58.021846 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:47:58.021852 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:47:58.021924 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:47:58.055082 2974151 cri.go:89] found id: ""
	I1217 10:47:58.055097 2974151 logs.go:282] 0 containers: []
	W1217 10:47:58.055104 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:47:58.055111 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:47:58.055120 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:47:58.117865 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:47:58.117887 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:47:58.136280 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:47:58.136297 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:47:58.204520 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:47:58.195962   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.196685   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.198489   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.199076   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:47:58.200656   12035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:47:58.204540 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:47:58.204551 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:47:58.267689 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:47:58.267713 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
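With every CRI listing coming back empty, the containerd journal gathered above is the first place a runtime-level failure would surface. The harness collects the last 400 lines wholesale; a narrower filter (the grep is an addition for manual triage, not part of the harness):

    sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail|sandbox'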
	I1217 10:48:00.795803 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:00.807186 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:00.807252 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:00.833048 2974151 cri.go:89] found id: ""
	I1217 10:48:00.833062 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.833069 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:00.833074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:00.833136 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:00.863311 2974151 cri.go:89] found id: ""
	I1217 10:48:00.863325 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.863332 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:00.863338 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:00.863398 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:00.887857 2974151 cri.go:89] found id: ""
	I1217 10:48:00.887871 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.887877 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:00.887883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:00.887940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:00.913735 2974151 cri.go:89] found id: ""
	I1217 10:48:00.913749 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.913756 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:00.913762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:00.913824 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:00.938305 2974151 cri.go:89] found id: ""
	I1217 10:48:00.938319 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.938327 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:00.938333 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:00.938390 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:00.963900 2974151 cri.go:89] found id: ""
	I1217 10:48:00.963914 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.963920 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:00.963925 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:00.963985 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:00.990708 2974151 cri.go:89] found id: ""
	I1217 10:48:00.990722 2974151 logs.go:282] 0 containers: []
	W1217 10:48:00.990729 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:00.990737 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:00.990747 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:01.012006 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:01.012023 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:01.099675 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:01.089770   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.090990   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.092688   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.093302   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:01.095197   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:01.099686 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:01.099702 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:01.164360 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:01.164381 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:01.194518 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:01.194535 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
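Under minikube's default kubeadm bootstrapper, kube-apiserver runs as a static pod that the kubelet materializes from a manifest file, so the kubelet journal plus the manifest directory show whether it was ever asked to start one. A sketch assuming the standard kubeadm layout:

    # static pod manifests; kube-apiserver.yaml should be present here
    sudo ls -l /etc/kubernetes/manifests
    # the kubelet's view of the apiserver pod
    sudo journalctl -u kubelet -n 400 --no-pager | grep -i apiserver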
	I1217 10:48:03.752593 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:03.763233 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:03.763297 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:03.788873 2974151 cri.go:89] found id: ""
	I1217 10:48:03.788893 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.788901 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:03.788907 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:03.788968 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:03.818571 2974151 cri.go:89] found id: ""
	I1217 10:48:03.818586 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.818593 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:03.818598 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:03.818657 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:03.844383 2974151 cri.go:89] found id: ""
	I1217 10:48:03.844397 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.844405 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:03.844410 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:03.844496 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:03.869318 2974151 cri.go:89] found id: ""
	I1217 10:48:03.869333 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.869339 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:03.869345 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:03.869404 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:03.895029 2974151 cri.go:89] found id: ""
	I1217 10:48:03.895043 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.895050 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:03.895055 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:03.895113 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:03.920493 2974151 cri.go:89] found id: ""
	I1217 10:48:03.920509 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.920516 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:03.920522 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:03.920592 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:03.945885 2974151 cri.go:89] found id: ""
	I1217 10:48:03.945898 2974151 logs.go:282] 0 containers: []
	W1217 10:48:03.945905 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:03.945912 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:03.945922 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:04.003008 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:04.003033 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:04.026399 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:04.026416 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:04.107334 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:04.098549   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.099321   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101190   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.101779   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:04.103317   12244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:04.107349 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:04.107360 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:04.174915 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:04.174940 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
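The "container status" gather uses a fallback chain: crictl if present, otherwise docker. Beyond a plain ps -a, crictl can also list pod sandboxes and exited containers, which separates "never created" from "crashed and garbage-collected"; a short sketch:

    sudo crictl pods                  # sandboxes known to the runtime
    sudo crictl ps -a --state Exited  # containers that started and then died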
	I1217 10:48:06.707611 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:06.718250 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:06.718313 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:06.743084 2974151 cri.go:89] found id: ""
	I1217 10:48:06.743098 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.743105 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:06.743110 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:06.743169 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:06.769923 2974151 cri.go:89] found id: ""
	I1217 10:48:06.769937 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.769945 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:06.769950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:06.770016 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:06.798634 2974151 cri.go:89] found id: ""
	I1217 10:48:06.798648 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.798655 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:06.798660 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:06.798719 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:06.823901 2974151 cri.go:89] found id: ""
	I1217 10:48:06.823915 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.823923 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:06.823928 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:06.823990 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:06.849872 2974151 cri.go:89] found id: ""
	I1217 10:48:06.849885 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.849892 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:06.849898 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:06.849957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:06.875558 2974151 cri.go:89] found id: ""
	I1217 10:48:06.875572 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.875580 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:06.875585 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:06.875642 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:06.901051 2974151 cri.go:89] found id: ""
	I1217 10:48:06.901065 2974151 logs.go:282] 0 containers: []
	W1217 10:48:06.901071 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:06.901079 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:06.901088 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:06.964468 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:06.964488 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:06.993527 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:06.993542 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:07.062199 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:07.062218 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:07.082316 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:07.082334 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:07.157387 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:07.148299   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.149171   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.150892   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.151645   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:07.153293   12368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
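Every one of these kubectl probes runs against the kubeconfig baked into the node, so it is worth confirming that the file really targets port 8441 before suspecting anything beyond the apiserver itself:

    sudo grep -n 'server:' /var/lib/minikube/kubeconfig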
	I1217 10:48:09.657640 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:09.667724 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:09.667783 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:09.693919 2974151 cri.go:89] found id: ""
	I1217 10:48:09.693935 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.693941 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:09.693948 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:09.694008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:09.722743 2974151 cri.go:89] found id: ""
	I1217 10:48:09.722758 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.722765 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:09.722770 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:09.722828 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:09.756610 2974151 cri.go:89] found id: ""
	I1217 10:48:09.756624 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.756632 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:09.756637 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:09.756693 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:09.786006 2974151 cri.go:89] found id: ""
	I1217 10:48:09.786021 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.786028 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:09.786033 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:09.786097 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:09.810865 2974151 cri.go:89] found id: ""
	I1217 10:48:09.810878 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.810885 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:09.810890 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:09.810947 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:09.838221 2974151 cri.go:89] found id: ""
	I1217 10:48:09.838235 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.838242 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:09.838247 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:09.838307 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:09.866748 2974151 cri.go:89] found id: ""
	I1217 10:48:09.866762 2974151 logs.go:282] 0 containers: []
	W1217 10:48:09.866769 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:09.866776 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:09.866786 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:09.929554 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:09.929576 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:09.959017 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:09.959032 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:10.017246 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:10.017265 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:10.036170 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:10.036188 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:10.112138 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:10.102458   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.103256   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105102   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.105527   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:10.107946   12475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:12.612434 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:12.622568 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:12.622628 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:12.650041 2974151 cri.go:89] found id: ""
	I1217 10:48:12.650061 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.650069 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:12.650074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:12.650134 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:12.674422 2974151 cri.go:89] found id: ""
	I1217 10:48:12.674437 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.674444 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:12.674450 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:12.674509 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:12.703294 2974151 cri.go:89] found id: ""
	I1217 10:48:12.703308 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.703315 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:12.703320 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:12.703378 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:12.727986 2974151 cri.go:89] found id: ""
	I1217 10:48:12.728006 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.728013 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:12.728019 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:12.728078 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:12.753787 2974151 cri.go:89] found id: ""
	I1217 10:48:12.753800 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.753807 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:12.753812 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:12.753869 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:12.779807 2974151 cri.go:89] found id: ""
	I1217 10:48:12.779831 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.779838 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:12.779844 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:12.779904 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:12.806196 2974151 cri.go:89] found id: ""
	I1217 10:48:12.806211 2974151 logs.go:282] 0 containers: []
	W1217 10:48:12.806219 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:12.806227 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:12.806237 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:12.862792 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:12.862812 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:12.879906 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:12.879923 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:12.944306 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:12.935386   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.935978   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.937685   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.938348   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:12.940016   12558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:12.944316 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:12.944327 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:13.006787 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:13.006812 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:15.546753 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:15.557080 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:15.557147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:15.582296 2974151 cri.go:89] found id: ""
	I1217 10:48:15.582309 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.582316 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:15.582321 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:15.582378 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:15.609992 2974151 cri.go:89] found id: ""
	I1217 10:48:15.610006 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.610013 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:15.610018 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:15.610075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:15.635702 2974151 cri.go:89] found id: ""
	I1217 10:48:15.635716 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.635723 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:15.635728 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:15.635788 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:15.661568 2974151 cri.go:89] found id: ""
	I1217 10:48:15.661582 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.661589 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:15.661595 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:15.661652 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:15.691028 2974151 cri.go:89] found id: ""
	I1217 10:48:15.691042 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.691049 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:15.691056 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:15.691114 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:15.715986 2974151 cri.go:89] found id: ""
	I1217 10:48:15.716009 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.716018 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:15.716023 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:15.716088 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:15.742377 2974151 cri.go:89] found id: ""
	I1217 10:48:15.742391 2974151 logs.go:282] 0 containers: []
	W1217 10:48:15.742398 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:15.742406 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:15.742417 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:15.759230 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:15.759248 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:15.824478 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:15.816058   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.816539   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818127   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.818799   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:15.820350   12661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:48:15.824490 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:15.824502 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:15.892784 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:15.892804 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:15.921547 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:15.921562 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
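The dmesg pass exists to catch node-level causes, most commonly the kernel OOM killer taking out control-plane processes. A filter for that signature, using the same severity levels the harness requests:

    sudo dmesg --level err,crit,alert,emerg | grep -iE 'out of memory|oom|killed process'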
	I1217 10:48:18.478009 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:18.488179 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:18.488242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:18.511813 2974151 cri.go:89] found id: ""
	I1217 10:48:18.511827 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.511843 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:18.511850 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:18.511929 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:18.535876 2974151 cri.go:89] found id: ""
	I1217 10:48:18.535890 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.535897 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:18.535902 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:18.535957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:18.560498 2974151 cri.go:89] found id: ""
	I1217 10:48:18.560512 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.560521 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:18.560526 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:18.560588 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:18.585005 2974151 cri.go:89] found id: ""
	I1217 10:48:18.585018 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.585025 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:18.585030 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:18.585087 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:18.609132 2974151 cri.go:89] found id: ""
	I1217 10:48:18.609146 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.609153 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:18.609158 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:18.609215 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:18.640158 2974151 cri.go:89] found id: ""
	I1217 10:48:18.640172 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.640187 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:18.640194 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:18.640266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:18.669845 2974151 cri.go:89] found id: ""
	I1217 10:48:18.669860 2974151 logs.go:282] 0 containers: []
	W1217 10:48:18.669867 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:18.669874 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:18.669884 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:18.726133 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:18.726154 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:18.743323 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:18.743341 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:18.807202 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:18.798544   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.799121   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.800756   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.801823   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.803369   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:18.798544   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.799121   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.800756   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.801823   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:18.803369   12770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
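	[editor's note] The describe-nodes step fails the same way on every cycle: kubectl's discovery client dials the apiserver at localhost:8441 and gets connection refused, i.e. nothing is listening on that port inside the node. A self-contained Go probe that reproduces the symptom (the address and the 2s timeout are assumptions chosen for illustration):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // If nothing listens on the apiserver port, Dial fails immediately with
        // "connect: connection refused", the same error kubectl's discovery
        // client reports five times above before giving up.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }

	A refused connection, as opposed to a timeout, indicates the network path is fine and the apiserver process is simply absent, consistent with the empty pgrep and crictl results above.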
	I1217 10:48:18.807212 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:18.807222 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:18.869437 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:18.869456 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
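	[editor's note] Taken together, each roughly three-second cycle above is one iteration of a wait loop: probe for a kube-apiserver process, and on failure gather kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A hypothetical Go sketch of that loop (runShell, waitForAPIServer, and the fixed 3s/30s intervals are this sketch's assumptions, not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // runShell is a local stand-in for minikube's ssh_runner so the sketch
    // is self-contained.
    func runShell(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForAPIServer polls for the apiserver process until the deadline,
    // mirroring the cadence of the probes in the log above.
    func waitForAPIServer(deadline time.Time) error {
        for time.Now().Before(deadline) {
            if out, err := runShell(`sudo pgrep -xnf "kube-apiserver.*minikube.*"`); err == nil && out != "" {
                return nil // apiserver process found
            }
            // Not running yet: a real implementation would gather logs here,
            // then sleep before the next probe.
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("kube-apiserver did not start before deadline")
    }

    func main() {
        if err := waitForAPIServer(time.Now().Add(30 * time.Second)); err != nil {
            fmt.Println(err)
        }
    }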
	I1217 10:48:21.398466 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:21.408899 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:21.408973 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:21.433836 2974151 cri.go:89] found id: ""
	I1217 10:48:21.433851 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.433858 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:21.433863 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:21.433925 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:21.458440 2974151 cri.go:89] found id: ""
	I1217 10:48:21.458455 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.458462 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:21.458473 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:21.458531 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:21.482664 2974151 cri.go:89] found id: ""
	I1217 10:48:21.482678 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.482685 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:21.482690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:21.482747 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:21.510499 2974151 cri.go:89] found id: ""
	I1217 10:48:21.510513 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.510520 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:21.510525 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:21.510583 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:21.541182 2974151 cri.go:89] found id: ""
	I1217 10:48:21.541196 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.541204 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:21.541210 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:21.541268 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:21.565692 2974151 cri.go:89] found id: ""
	I1217 10:48:21.565705 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.565717 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:21.565723 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:21.565781 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:21.589704 2974151 cri.go:89] found id: ""
	I1217 10:48:21.589718 2974151 logs.go:282] 0 containers: []
	W1217 10:48:21.589725 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:21.589733 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:21.589743 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:21.651127 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:21.642467   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.643175   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.644846   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.645439   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.647213   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:21.642467   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.643175   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.644846   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.645439   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:21.647213   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:21.651137 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:21.651153 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:21.714087 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:21.714110 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:21.743190 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:21.743205 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:21.803426 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:21.803446 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:24.321453 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:24.331883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:24.331948 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:24.356312 2974151 cri.go:89] found id: ""
	I1217 10:48:24.356327 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.356334 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:24.356340 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:24.356398 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:24.382382 2974151 cri.go:89] found id: ""
	I1217 10:48:24.382395 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.382402 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:24.382407 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:24.382466 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:24.410304 2974151 cri.go:89] found id: ""
	I1217 10:48:24.410318 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.410325 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:24.410330 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:24.410387 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:24.434459 2974151 cri.go:89] found id: ""
	I1217 10:48:24.434474 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.434481 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:24.434486 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:24.434551 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:24.459866 2974151 cri.go:89] found id: ""
	I1217 10:48:24.459881 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.459888 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:24.459893 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:24.459989 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:24.486458 2974151 cri.go:89] found id: ""
	I1217 10:48:24.486471 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.486478 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:24.486484 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:24.486548 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:24.511349 2974151 cri.go:89] found id: ""
	I1217 10:48:24.511363 2974151 logs.go:282] 0 containers: []
	W1217 10:48:24.511372 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:24.511379 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:24.511390 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:24.575296 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:24.566670   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.567433   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569106   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569660   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.571348   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:24.566670   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.567433   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569106   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.569660   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:24.571348   12972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:24.575314 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:24.575325 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:24.637043 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:24.637063 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:24.665459 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:24.665475 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:24.722699 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:24.722722 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
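	[editor's note] The diagnostic commands themselves are stable across cycles even though their order rotates: the kubelet and containerd units via journalctl, a filtered dmesg tail, and a container listing that falls back from crictl to docker. A small Go wrapper that replays the same four commands, for readers who want to reproduce the gathering step by hand (gather is a name invented for this sketch):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command through bash, matching the quoting
    // used in the logged `ssh_runner` invocations above.
    func gather(name, cmd string) {
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("== %s ==\n%s\n", name, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("containerd", "sudo journalctl -u containerd -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }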
	I1217 10:48:27.240739 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:27.252359 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:27.252432 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:27.280163 2974151 cri.go:89] found id: ""
	I1217 10:48:27.280177 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.280196 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:27.280201 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:27.280266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:27.309589 2974151 cri.go:89] found id: ""
	I1217 10:48:27.309603 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.309622 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:27.309627 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:27.309692 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:27.337538 2974151 cri.go:89] found id: ""
	I1217 10:48:27.337552 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.337559 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:27.337564 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:27.337622 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:27.361942 2974151 cri.go:89] found id: ""
	I1217 10:48:27.361957 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.361965 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:27.361970 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:27.362029 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:27.390818 2974151 cri.go:89] found id: ""
	I1217 10:48:27.390832 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.390840 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:27.390845 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:27.390908 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:27.422856 2974151 cri.go:89] found id: ""
	I1217 10:48:27.422871 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.422878 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:27.422883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:27.422943 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:27.448978 2974151 cri.go:89] found id: ""
	I1217 10:48:27.448992 2974151 logs.go:282] 0 containers: []
	W1217 10:48:27.448999 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:27.449007 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:27.449016 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:27.504505 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:27.504523 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:27.521306 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:27.521327 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:27.585173 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:27.576673   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.577398   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579125   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579750   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.581292   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:27.576673   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.577398   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579125   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.579750   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:27.581292   13080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:27.585182 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:27.585193 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:27.646817 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:27.646836 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:30.175129 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:30.186313 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:30.186377 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:30.213450 2974151 cri.go:89] found id: ""
	I1217 10:48:30.213464 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.213471 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:30.213476 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:30.213541 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:30.240025 2974151 cri.go:89] found id: ""
	I1217 10:48:30.240039 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.240046 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:30.240051 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:30.240126 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:30.286752 2974151 cri.go:89] found id: ""
	I1217 10:48:30.286766 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.286774 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:30.286779 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:30.286858 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:30.317210 2974151 cri.go:89] found id: ""
	I1217 10:48:30.317232 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.317240 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:30.317245 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:30.317305 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:30.345461 2974151 cri.go:89] found id: ""
	I1217 10:48:30.345475 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.345482 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:30.345487 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:30.345546 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:30.375558 2974151 cri.go:89] found id: ""
	I1217 10:48:30.375576 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.375590 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:30.375595 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:30.375655 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:30.401652 2974151 cri.go:89] found id: ""
	I1217 10:48:30.401668 2974151 logs.go:282] 0 containers: []
	W1217 10:48:30.401675 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:30.401683 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:30.401693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:30.462370 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:30.462393 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:30.480350 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:30.480366 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:30.545595 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:30.536885   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.537607   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539274   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539750   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.541271   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:30.536885   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.537607   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539274   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.539750   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:30.541271   13184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:30.545607 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:30.545619 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:30.609333 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:30.609353 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:33.138648 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:33.149215 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:33.149282 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:33.174735 2974151 cri.go:89] found id: ""
	I1217 10:48:33.174755 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.174764 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:33.174769 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:33.174832 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:33.200480 2974151 cri.go:89] found id: ""
	I1217 10:48:33.200495 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.200502 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:33.200507 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:33.200567 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:33.230102 2974151 cri.go:89] found id: ""
	I1217 10:48:33.230117 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.230124 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:33.230129 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:33.230186 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:33.273250 2974151 cri.go:89] found id: ""
	I1217 10:48:33.273264 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.273271 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:33.273278 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:33.273336 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:33.304262 2974151 cri.go:89] found id: ""
	I1217 10:48:33.304276 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.304293 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:33.304299 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:33.304359 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:33.332160 2974151 cri.go:89] found id: ""
	I1217 10:48:33.332174 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.332181 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:33.332186 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:33.332247 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:33.357270 2974151 cri.go:89] found id: ""
	I1217 10:48:33.357284 2974151 logs.go:282] 0 containers: []
	W1217 10:48:33.357291 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:33.357299 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:33.357308 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:33.420730 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:33.420751 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:33.448992 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:33.449007 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:33.504960 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:33.504979 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:33.521896 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:33.521913 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:33.584222 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:33.575275   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.576061   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.577717   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.578259   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.580086   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:33.575275   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.576061   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.577717   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.578259   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:33.580086   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:36.084525 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:36.095613 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:36.095678 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:36.121922 2974151 cri.go:89] found id: ""
	I1217 10:48:36.121936 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.121944 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:36.121950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:36.122009 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:36.150594 2974151 cri.go:89] found id: ""
	I1217 10:48:36.150608 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.150616 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:36.150621 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:36.150682 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:36.179197 2974151 cri.go:89] found id: ""
	I1217 10:48:36.179210 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.179218 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:36.179223 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:36.179283 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:36.203527 2974151 cri.go:89] found id: ""
	I1217 10:48:36.203541 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.203548 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:36.203553 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:36.203620 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:36.228332 2974151 cri.go:89] found id: ""
	I1217 10:48:36.228345 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.228352 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:36.228358 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:36.228456 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:36.262749 2974151 cri.go:89] found id: ""
	I1217 10:48:36.262763 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.262769 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:36.262774 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:36.262834 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:36.300340 2974151 cri.go:89] found id: ""
	I1217 10:48:36.300353 2974151 logs.go:282] 0 containers: []
	W1217 10:48:36.300363 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:36.300371 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:36.300380 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:36.358709 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:36.358729 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:36.375631 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:36.375649 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:36.440551 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:36.432145   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.432737   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.434406   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.434949   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.436697   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:36.432145   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.432737   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.434406   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.434949   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:36.436697   13391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:36.440560 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:36.440571 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:36.502941 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:36.502960 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:39.031727 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:39.042285 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:39.042350 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:39.068264 2974151 cri.go:89] found id: ""
	I1217 10:48:39.068278 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.068285 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:39.068291 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:39.068352 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:39.091732 2974151 cri.go:89] found id: ""
	I1217 10:48:39.091745 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.091752 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:39.091757 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:39.091815 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:39.118106 2974151 cri.go:89] found id: ""
	I1217 10:48:39.118119 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.118126 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:39.118133 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:39.118189 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:39.146834 2974151 cri.go:89] found id: ""
	I1217 10:48:39.146848 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.146856 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:39.146861 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:39.146919 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:39.175980 2974151 cri.go:89] found id: ""
	I1217 10:48:39.175994 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.176001 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:39.176006 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:39.176069 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:39.201501 2974151 cri.go:89] found id: ""
	I1217 10:48:39.201515 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.201522 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:39.201527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:39.201582 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:39.226802 2974151 cri.go:89] found id: ""
	I1217 10:48:39.226816 2974151 logs.go:282] 0 containers: []
	W1217 10:48:39.226833 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:39.226841 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:39.226852 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:39.283913 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:39.283931 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:39.304511 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:39.304528 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:39.377031 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:39.368579   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.369282   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.370783   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.371295   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.372809   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:39.368579   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.369282   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.370783   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.371295   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:39.372809   13496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:39.377044 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:39.377059 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:39.440871 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:39.440891 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:41.970682 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:41.981109 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:41.981168 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:42.014790 2974151 cri.go:89] found id: ""
	I1217 10:48:42.014806 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.014813 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:42.014820 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:42.014890 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:42.044163 2974151 cri.go:89] found id: ""
	I1217 10:48:42.044177 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.044183 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:42.044188 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:42.044247 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:42.074548 2974151 cri.go:89] found id: ""
	I1217 10:48:42.074581 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.074595 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:42.074605 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:42.074707 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:42.108730 2974151 cri.go:89] found id: ""
	I1217 10:48:42.108755 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.108763 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:42.108769 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:42.108838 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:42.140974 2974151 cri.go:89] found id: ""
	I1217 10:48:42.140989 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.140997 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:42.141002 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:42.141075 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:42.185841 2974151 cri.go:89] found id: ""
	I1217 10:48:42.185857 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.185865 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:42.185871 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:42.185940 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:42.227621 2974151 cri.go:89] found id: ""
	I1217 10:48:42.227637 2974151 logs.go:282] 0 containers: []
	W1217 10:48:42.227645 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:42.227654 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:42.227664 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:42.293458 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:42.293479 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:42.316925 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:42.316945 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:42.388580 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:42.379787   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.380216   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.381959   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.382335   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.383960   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:42.379787   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.380216   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.381959   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.382335   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:42.383960   13601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:42.388600 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:42.388612 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:42.451727 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:42.451749 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
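The block above is one complete diagnostics pass: with the apiserver unreachable, minikube probes each expected control-plane container via crictl, then gathers kubelet, dmesg, "describe nodes", containerd, and container-status output. The same pass, collapsed into a plain shell sketch (an illustration only, assuming the commands are run inside the node over `minikube ssh`; each command is copied from the log lines above):

    # One diagnostics pass, by hand. Assumption: run inside the minikube node.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids="$(sudo crictl ps -a --quiet --name="${name}")"
      [ -z "${ids}" ] && echo "No container was found matching \"${name}\""
    done
    sudo journalctl -u kubelet -n 400                                        # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # dmesg
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig                              # fails while :8441 is down
    sudo journalctl -u containerd -n 400                                     # containerd
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status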
	I1217 10:48:44.984590 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:44.995270 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:44.995356 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:45.061848 2974151 cri.go:89] found id: ""
	I1217 10:48:45.061864 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.061871 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:45.061878 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:45.061944 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:45.120144 2974151 cri.go:89] found id: ""
	I1217 10:48:45.120160 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.120168 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:45.120174 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:45.120245 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:45.160210 2974151 cri.go:89] found id: ""
	I1217 10:48:45.160226 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.160235 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:45.160240 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:45.160314 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:45.221796 2974151 cri.go:89] found id: ""
	I1217 10:48:45.221829 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.221858 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:45.221880 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:45.222024 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:45.304676 2974151 cri.go:89] found id: ""
	I1217 10:48:45.304703 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.304711 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:45.304717 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:45.304788 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:45.337767 2974151 cri.go:89] found id: ""
	I1217 10:48:45.337790 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.337798 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:45.337804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:45.337871 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:45.373372 2974151 cri.go:89] found id: ""
	I1217 10:48:45.373387 2974151 logs.go:282] 0 containers: []
	W1217 10:48:45.373394 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:45.373402 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:45.373412 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:45.433269 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:45.433288 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:45.450287 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:45.450304 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:45.517643 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:45.508647   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.509270   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511035   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511639   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.513218   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:45.508647   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.509270   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511035   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.511639   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:45.513218   13703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:45.517653 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:45.517665 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:45.581750 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:45.581771 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
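Every "describe nodes" attempt in these passes fails identically: kubectl on the node dials https://localhost:8441 and is refused, meaning nothing is listening on the configured apiserver port. Two quick checks from inside the node (a sketch; it assumes the iproute2 `ss` utility exists in the node image, which the log does not confirm):

    # Assumption: `ss` (iproute2) is available in the node image.
    sudo ss -ltn | grep -w 8441 || echo "nothing listening on :8441"
    # The kubelet journal collected above is where a kube-apiserver
    # static-pod start failure would surface:
    sudo journalctl -u kubelet -n 400 | grep -i kube-apiserver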
	I1217 10:48:48.117070 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:48.128197 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:48.128258 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:48.153434 2974151 cri.go:89] found id: ""
	I1217 10:48:48.153449 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.153455 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:48.153461 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:48.153520 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:48.178677 2974151 cri.go:89] found id: ""
	I1217 10:48:48.178691 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.178698 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:48.178703 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:48.178766 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:48.206864 2974151 cri.go:89] found id: ""
	I1217 10:48:48.206879 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.206886 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:48.206891 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:48.206957 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:48.231924 2974151 cri.go:89] found id: ""
	I1217 10:48:48.231938 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.231945 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:48.231950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:48.232008 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:48.274705 2974151 cri.go:89] found id: ""
	I1217 10:48:48.274718 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.274726 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:48.274731 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:48.274790 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:48.303863 2974151 cri.go:89] found id: ""
	I1217 10:48:48.303877 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.303884 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:48.303889 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:48.303950 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:48.328838 2974151 cri.go:89] found id: ""
	I1217 10:48:48.328852 2974151 logs.go:282] 0 containers: []
	W1217 10:48:48.328859 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:48.328867 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:48.328878 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:48.389442 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:48.389462 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:48.406684 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:48.406700 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:48.472922 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:48.463986   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.464483   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466011   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466510   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.468051   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:48.463986   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.464483   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466011   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.466510   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:48.468051   13808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:48.472932 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:48.472943 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:48.535655 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:48.535674 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
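Note the fallback baked into the container-status command above: `which crictl || echo crictl` substitutes the literal word crictl when the binary is missing from root's PATH (so the first command fails cleanly), and `|| sudo docker ps -a` then covers docker-runtime clusters. Written out long-hand (a sketch of the same idiom, not minikube's own code):

    # Explicit form of the crictl-or-docker fallback in the log line above.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a     # CRI runtimes such as containerd or CRI-O
    else
      sudo docker ps -a     # docker-runtime fallback
    fi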
	I1217 10:48:51.069071 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:51.081466 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:51.081531 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:51.111124 2974151 cri.go:89] found id: ""
	I1217 10:48:51.111139 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.111146 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:51.111152 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:51.111218 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:51.143791 2974151 cri.go:89] found id: ""
	I1217 10:48:51.143806 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.143813 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:51.143818 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:51.143881 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:51.169640 2974151 cri.go:89] found id: ""
	I1217 10:48:51.169655 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.169661 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:51.169666 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:51.169726 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:51.195027 2974151 cri.go:89] found id: ""
	I1217 10:48:51.195041 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.195048 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:51.195053 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:51.195115 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:51.219317 2974151 cri.go:89] found id: ""
	I1217 10:48:51.219330 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.219337 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:51.219342 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:51.219401 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:51.246522 2974151 cri.go:89] found id: ""
	I1217 10:48:51.246536 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.246543 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:51.246548 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:51.246606 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:51.277021 2974151 cri.go:89] found id: ""
	I1217 10:48:51.277047 2974151 logs.go:282] 0 containers: []
	W1217 10:48:51.277055 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:51.277064 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:51.277074 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:51.345341 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:51.345364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:51.378677 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:51.378693 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:51.438850 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:51.438869 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:51.455900 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:51.455916 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:51.516892 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:51.508779   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.509483   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.510624   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.511147   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.512798   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:51.508779   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.509483   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.510624   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.511147   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:51.512798   13926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
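Between passes, minikube re-probes apiserver liveness with pgrep roughly every three seconds (10:48:44 → 10:48:48 → 10:48:51 → 10:48:54 → 10:48:57 …). The probe matches the newest process whose full command line fits the pattern; an empty result is what keeps this retry loop running:

    # Same probe as the log's pgrep call:
    #   -f match the full command line, -x require an exact match, -n newest process.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo "no running kube-apiserver process yet; retrying"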
	I1217 10:48:54.017193 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:54.028476 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:54.028544 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:54.056997 2974151 cri.go:89] found id: ""
	I1217 10:48:54.057012 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.057019 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:54.057025 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:54.057086 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:54.083159 2974151 cri.go:89] found id: ""
	I1217 10:48:54.083175 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.083183 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:54.083189 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:54.083251 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:54.109519 2974151 cri.go:89] found id: ""
	I1217 10:48:54.109534 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.109549 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:54.109557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:54.109624 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:54.134157 2974151 cri.go:89] found id: ""
	I1217 10:48:54.134171 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.134178 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:54.134183 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:54.134239 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:54.162788 2974151 cri.go:89] found id: ""
	I1217 10:48:54.162802 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.162819 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:54.162825 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:54.162894 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:54.189731 2974151 cri.go:89] found id: ""
	I1217 10:48:54.189749 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.189756 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:54.189762 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:54.189850 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:54.214954 2974151 cri.go:89] found id: ""
	I1217 10:48:54.214968 2974151 logs.go:282] 0 containers: []
	W1217 10:48:54.214975 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:54.214982 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:54.214992 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:54.232128 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:54.232145 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:54.332775 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:54.323643   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.324176   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.325741   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.326329   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.328065   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:54.323643   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.324176   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.325741   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.326329   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:54.328065   14008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:54.332784 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:54.332794 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:54.400873 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:54.400902 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:54.436837 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:54.436855 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:56.995650 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:57.014000 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:57.014068 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:57.039621 2974151 cri.go:89] found id: ""
	I1217 10:48:57.039635 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.039642 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:57.039647 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:57.039706 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:48:57.063811 2974151 cri.go:89] found id: ""
	I1217 10:48:57.063824 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.063832 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:48:57.063837 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:48:57.063901 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:48:57.089763 2974151 cri.go:89] found id: ""
	I1217 10:48:57.089777 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.089784 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:48:57.089789 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:48:57.089849 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:48:57.119137 2974151 cri.go:89] found id: ""
	I1217 10:48:57.119151 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.119157 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:48:57.119163 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:48:57.119222 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:48:57.145301 2974151 cri.go:89] found id: ""
	I1217 10:48:57.145317 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.145324 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:48:57.145330 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:48:57.145390 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:48:57.169967 2974151 cri.go:89] found id: ""
	I1217 10:48:57.169981 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.169989 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:48:57.169994 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:48:57.170055 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:48:57.199678 2974151 cri.go:89] found id: ""
	I1217 10:48:57.199693 2974151 logs.go:282] 0 containers: []
	W1217 10:48:57.199700 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:48:57.199708 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:48:57.199718 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:48:57.259994 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:48:57.260013 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:48:57.283244 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:48:57.283262 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:48:57.355664 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:48:57.347248   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.348013   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.349816   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.350323   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.351848   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:48:57.347248   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.348013   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.349816   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.350323   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:48:57.351848   14121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:48:57.355675 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:48:57.355686 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:48:57.418570 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:48:57.418593 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:48:59.953153 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:48:59.963676 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:48:59.963736 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:48:59.989636 2974151 cri.go:89] found id: ""
	I1217 10:48:59.989654 2974151 logs.go:282] 0 containers: []
	W1217 10:48:59.989662 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:48:59.989667 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:48:59.989734 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:00.158254 2974151 cri.go:89] found id: ""
	I1217 10:49:00.158276 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.158284 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:00.158290 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:00.158371 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:00.272664 2974151 cri.go:89] found id: ""
	I1217 10:49:00.272680 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.272687 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:00.272693 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:00.272790 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:00.329030 2974151 cri.go:89] found id: ""
	I1217 10:49:00.329045 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.329052 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:00.329058 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:00.329123 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:00.376045 2974151 cri.go:89] found id: ""
	I1217 10:49:00.376060 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.376068 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:00.376074 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:00.376141 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:00.406187 2974151 cri.go:89] found id: ""
	I1217 10:49:00.406202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.406210 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:00.406216 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:00.406281 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:00.436523 2974151 cri.go:89] found id: ""
	I1217 10:49:00.436538 2974151 logs.go:282] 0 containers: []
	W1217 10:49:00.436546 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:00.436554 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:00.436575 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:00.504375 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:00.495726   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.496591   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498206   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498541   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.500005   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:00.495726   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.496591   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498206   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.498541   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:00.500005   14220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:00.504450 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:00.504460 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:00.568543 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:00.568563 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:00.600756 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:00.600773 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:00.662114 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:00.662131 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:03.181138 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:03.191733 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:03.191796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:03.220693 2974151 cri.go:89] found id: ""
	I1217 10:49:03.220707 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.220714 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:03.220719 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:03.220775 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:03.245346 2974151 cri.go:89] found id: ""
	I1217 10:49:03.245359 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.245366 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:03.245371 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:03.245434 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:03.283019 2974151 cri.go:89] found id: ""
	I1217 10:49:03.283034 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.283042 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:03.283072 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:03.283134 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:03.312584 2974151 cri.go:89] found id: ""
	I1217 10:49:03.312599 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.312605 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:03.312611 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:03.312670 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:03.337325 2974151 cri.go:89] found id: ""
	I1217 10:49:03.337340 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.337347 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:03.337352 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:03.337421 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:03.363072 2974151 cri.go:89] found id: ""
	I1217 10:49:03.363086 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.363093 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:03.363099 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:03.363156 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:03.388307 2974151 cri.go:89] found id: ""
	I1217 10:49:03.388321 2974151 logs.go:282] 0 containers: []
	W1217 10:49:03.388328 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:03.388336 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:03.388346 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:03.450591 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:03.450611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:03.479831 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:03.479848 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:03.538921 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:03.538940 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:03.557193 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:03.557210 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:03.629818 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:03.620815   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.621960   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.622408   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.623908   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.624403   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:03.620815   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.621960   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.622408   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.623908   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:03.624403   14348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:06.130079 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:06.140562 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:06.140625 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:06.176078 2974151 cri.go:89] found id: ""
	I1217 10:49:06.176092 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.176100 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:06.176106 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:06.176165 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:06.201648 2974151 cri.go:89] found id: ""
	I1217 10:49:06.201669 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.201678 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:06.201683 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:06.201741 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:06.225531 2974151 cri.go:89] found id: ""
	I1217 10:49:06.225545 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.225552 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:06.225557 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:06.225615 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:06.252027 2974151 cri.go:89] found id: ""
	I1217 10:49:06.252042 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.252049 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:06.252056 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:06.252118 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:06.280340 2974151 cri.go:89] found id: ""
	I1217 10:49:06.280353 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.280361 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:06.280366 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:06.280449 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:06.313759 2974151 cri.go:89] found id: ""
	I1217 10:49:06.313773 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.313781 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:06.313786 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:06.313846 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:06.338616 2974151 cri.go:89] found id: ""
	I1217 10:49:06.338630 2974151 logs.go:282] 0 containers: []
	W1217 10:49:06.338638 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:06.338645 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:06.338655 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:06.394759 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:06.394784 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:06.412192 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:06.412208 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:06.475020 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:06.466865   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.467591   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469274   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469719   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.471184   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:06.466865   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.467591   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469274   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.469719   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:06.471184   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:06.475030 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:06.475039 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:06.537503 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:06.537522 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
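	The pass above asks the CRI for every expected control-plane container by name; because `sudo crictl ps -a --quiet --name=<component>` prints nothing, each query ends in the `found id: ""` / `0 containers: []` pair. A minimal Go sketch of that query follows (plain local os/exec standing in for minikube's ssh_runner; the helper name is illustrative, not minikube's):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mirrors the query shown in the log: list the IDs of all
// containers, in any state, whose name matches. An empty result is what
// produces the `found id: ""` / `0 containers: []` lines above.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	// The same seven component names the loop iterates over.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	} {
		ids, err := listCRIContainers(name)
		switch {
		case err != nil:
			fmt.Println("warn:", err)
		case len(ids) == 0:
			fmt.Printf("no container was found matching %q\n", name)
		default:
			fmt.Println(name, "->", ids)
		}
	}
}
```

	Run on the node at this point, the sketch would print the same seven "no container" lines the log shows: no control-plane container was ever created.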
	I1217 10:49:09.067381 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:09.078169 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:09.078242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:09.102188 2974151 cri.go:89] found id: ""
	I1217 10:49:09.102202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.102210 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:09.102215 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:09.102276 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:09.127428 2974151 cri.go:89] found id: ""
	I1217 10:49:09.127443 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.127457 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:09.127462 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:09.127523 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:09.155928 2974151 cri.go:89] found id: ""
	I1217 10:49:09.155943 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.155951 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:09.155956 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:09.156013 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:09.180962 2974151 cri.go:89] found id: ""
	I1217 10:49:09.180976 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.180983 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:09.180988 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:09.181047 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:09.206446 2974151 cri.go:89] found id: ""
	I1217 10:49:09.206459 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.206466 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:09.206471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:09.206527 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:09.234163 2974151 cri.go:89] found id: ""
	I1217 10:49:09.234177 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.234184 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:09.234191 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:09.234248 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:09.266062 2974151 cri.go:89] found id: ""
	I1217 10:49:09.266076 2974151 logs.go:282] 0 containers: []
	W1217 10:49:09.266083 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:09.266091 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:09.266100 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:09.331047 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:09.331068 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:09.348066 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:09.348082 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:09.416466 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:09.408138   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.408821   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410542   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410884   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.412400   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:09.408138   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.408821   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410542   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.410884   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:09.412400   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:09.416475 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:09.416488 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:09.477634 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:09.477656 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
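	Every `describe nodes` attempt dies with `connect: connection refused` on `[::1]:8441`, which only says that nothing is listening on the apiserver port, consistent with the empty `kube-apiserver` container list above. A sketch that reproduces the same check without kubectl (port copied from the log; the probe itself and its timeout are illustrative):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// A "connection refused" from kubectl against https://localhost:8441 means
// no process is accepting on the apiserver port; this probe reproduces the
// check directly.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// On the node above this prints the same "connect: connection refused".
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}
```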
	I1217 10:49:12.006559 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:12.017999 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:12.018064 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:12.043667 2974151 cri.go:89] found id: ""
	I1217 10:49:12.043681 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.043689 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:12.043694 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:12.043755 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:12.067975 2974151 cri.go:89] found id: ""
	I1217 10:49:12.068000 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.068008 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:12.068013 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:12.068082 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:12.093913 2974151 cri.go:89] found id: ""
	I1217 10:49:12.093936 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.093944 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:12.093950 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:12.094011 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:12.123009 2974151 cri.go:89] found id: ""
	I1217 10:49:12.123022 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.123029 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:12.123046 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:12.123121 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:12.152263 2974151 cri.go:89] found id: ""
	I1217 10:49:12.152277 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.152284 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:12.152299 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:12.152357 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:12.178500 2974151 cri.go:89] found id: ""
	I1217 10:49:12.178514 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.178521 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:12.178527 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:12.178601 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:12.203660 2974151 cri.go:89] found id: ""
	I1217 10:49:12.203674 2974151 logs.go:282] 0 containers: []
	W1217 10:49:12.203692 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:12.203700 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:12.203711 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:12.261019 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:12.261039 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:12.279774 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:12.279790 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:12.350172 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:12.342156   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.342650   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344118   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344659   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.346217   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:12.342156   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.342650   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344118   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.344659   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:12.346217   14651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:12.350182 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:12.350192 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:12.412715 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:12.412734 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
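	The timestamps (10:49:06, :09, :12, ...) show each full gather pass finishing in well under a second and the loop retrying on a roughly three-second cadence. A sketch of such a poll loop, built around the `sudo pgrep -xnf kube-apiserver.*minikube.*` check that opens each pass (the interval and deadline are illustrative, not minikube's actual tuning):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiServerProcessRunning mirrors the check that opens each pass in the log:
// pgrep exits nonzero when no matching process exists.
func apiServerProcessRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServer polls on the ~3s cadence visible in the timestamps above.
func waitForAPIServer(deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if apiServerProcessRunning() {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```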
	I1217 10:49:14.942372 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:14.953073 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:14.953133 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:14.986889 2974151 cri.go:89] found id: ""
	I1217 10:49:14.986903 2974151 logs.go:282] 0 containers: []
	W1217 10:49:14.986910 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:14.986916 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:14.987012 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:15.024956 2974151 cri.go:89] found id: ""
	I1217 10:49:15.024972 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.024980 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:15.024986 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:15.025062 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:15.055135 2974151 cri.go:89] found id: ""
	I1217 10:49:15.055159 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.055170 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:15.055175 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:15.055244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:15.083268 2974151 cri.go:89] found id: ""
	I1217 10:49:15.083283 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.083310 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:15.083316 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:15.083386 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:15.110734 2974151 cri.go:89] found id: ""
	I1217 10:49:15.110750 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.110757 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:15.110764 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:15.110825 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:15.140854 2974151 cri.go:89] found id: ""
	I1217 10:49:15.140869 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.140876 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:15.140881 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:15.140981 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:15.167259 2974151 cri.go:89] found id: ""
	I1217 10:49:15.167273 2974151 logs.go:282] 0 containers: []
	W1217 10:49:15.167280 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:15.167288 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:15.167298 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:15.224081 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:15.224100 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:15.241661 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:15.241679 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:15.322485 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:15.313320   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.313943   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316017   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316658   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.318128   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:15.313320   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.313943   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316017   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.316658   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:15.318128   14751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:15.322495 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:15.322517 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:15.385975 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:15.385996 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
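	Besides the CRI queries, each pass collects the same three host-level sources: the last 400 journal lines for kubelet and containerd, and warning-or-worse kernel messages. The command strings below are copied verbatim from the log's "Gathering logs for ..." lines; wrapping them in a single collector is an illustrative sketch:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Command strings copied verbatim from the log's "Gathering logs for ..." lines.
var gatherers = map[string]string{
	"kubelet":    "sudo journalctl -u kubelet -n 400",
	"containerd": "sudo journalctl -u containerd -n 400",
	"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
}

func main() {
	for name, cmd := range gatherers {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s: %d bytes ===\n", name, len(out))
	}
}
```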
	I1217 10:49:17.915565 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:17.925558 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:17.925619 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:17.950881 2974151 cri.go:89] found id: ""
	I1217 10:49:17.950895 2974151 logs.go:282] 0 containers: []
	W1217 10:49:17.950902 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:17.950907 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:17.950964 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:17.975955 2974151 cri.go:89] found id: ""
	I1217 10:49:17.975969 2974151 logs.go:282] 0 containers: []
	W1217 10:49:17.975975 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:17.975980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:17.976039 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:18.004484 2974151 cri.go:89] found id: ""
	I1217 10:49:18.004503 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.004512 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:18.004517 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:18.004597 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:18.031679 2974151 cri.go:89] found id: ""
	I1217 10:49:18.031694 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.031702 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:18.031708 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:18.031775 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:18.059398 2974151 cri.go:89] found id: ""
	I1217 10:49:18.059412 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.059436 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:18.059443 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:18.059504 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:18.085330 2974151 cri.go:89] found id: ""
	I1217 10:49:18.085344 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.085352 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:18.085357 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:18.085420 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:18.114569 2974151 cri.go:89] found id: ""
	I1217 10:49:18.114585 2974151 logs.go:282] 0 containers: []
	W1217 10:49:18.114592 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:18.114600 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:18.114611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:18.178110 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:18.169772   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.170633   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172208   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172731   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.174231   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:18.169772   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.170633   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172208   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.172731   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:18.174231   14849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:18.178122 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:18.178132 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:18.241410 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:18.241434 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:18.273882 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:18.273898 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:18.334306 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:18.334324 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
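	Note that the gather order changed in this pass (describe nodes first, kubelet and dmesg last) compared with the earlier ones. That shuffling is consistent with the log sources being held in a Go map, whose iteration order is deliberately randomized on each range; a tiny demonstration (source names copied from the log, the map itself is an assumption about the implementation):

```go
package main

import "fmt"

func main() {
	// Same source names as the log; holding them in a map (an assumption
	// about the implementation) yields a different range order on each pass.
	sources := map[string]struct{}{
		"kubelet": {}, "dmesg": {}, "describe nodes": {},
		"containerd": {}, "container status": {},
	}
	for pass := 1; pass <= 2; pass++ {
		fmt.Printf("pass %d:", pass)
		for name := range sources { // iteration order is randomized
			fmt.Printf(" %s |", name)
		}
		fmt.Println()
	}
}
```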
	I1217 10:49:20.852121 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:20.862188 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:20.862248 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:20.886819 2974151 cri.go:89] found id: ""
	I1217 10:49:20.886834 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.886850 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:20.886857 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:20.886930 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:20.913071 2974151 cri.go:89] found id: ""
	I1217 10:49:20.913086 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.913093 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:20.913098 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:20.913157 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:20.937301 2974151 cri.go:89] found id: ""
	I1217 10:49:20.937315 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.937322 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:20.937327 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:20.937386 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:20.966247 2974151 cri.go:89] found id: ""
	I1217 10:49:20.966260 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.966267 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:20.966272 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:20.966328 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:20.991713 2974151 cri.go:89] found id: ""
	I1217 10:49:20.991727 2974151 logs.go:282] 0 containers: []
	W1217 10:49:20.991734 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:20.991739 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:20.991796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:21.017813 2974151 cri.go:89] found id: ""
	I1217 10:49:21.017828 2974151 logs.go:282] 0 containers: []
	W1217 10:49:21.017835 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:21.017841 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:21.017901 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:21.047576 2974151 cri.go:89] found id: ""
	I1217 10:49:21.047590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:21.047598 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:21.047605 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:21.047615 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:21.109681 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:21.109707 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:21.127095 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:21.127114 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:21.192482 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:21.184199   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.184777   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186485   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186953   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.188551   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:21.184199   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.184777   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186485   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.186953   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:21.188551   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:21.192493 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:21.192504 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:21.256363 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:21.256383 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
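	The container-status line carries its own fallback chain: prefer crictl (resolved to an absolute path via `which` when possible) and only shell out to `docker ps -a` if that invocation fails. The sketch below reuses the exact shell string from the log inside a plain os/exec wrapper (the wrapper is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reuses the exact fallback chain from the log: resolve
// crictl via `which` when possible, and only if the crictl invocation fails
// fall back to docker.
func containerStatus() (string, error) {
	const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}
```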
	I1217 10:49:23.824987 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:23.835117 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:23.835179 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:23.860953 2974151 cri.go:89] found id: ""
	I1217 10:49:23.860966 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.860973 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:23.860979 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:23.861036 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:23.894776 2974151 cri.go:89] found id: ""
	I1217 10:49:23.894790 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.894797 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:23.894802 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:23.894863 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:23.923645 2974151 cri.go:89] found id: ""
	I1217 10:49:23.923660 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.923667 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:23.923678 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:23.923735 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:23.950354 2974151 cri.go:89] found id: ""
	I1217 10:49:23.950368 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.950374 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:23.950380 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:23.950437 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:23.974645 2974151 cri.go:89] found id: ""
	I1217 10:49:23.974659 2974151 logs.go:282] 0 containers: []
	W1217 10:49:23.974666 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:23.974671 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:23.974732 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:24.000121 2974151 cri.go:89] found id: ""
	I1217 10:49:24.000149 2974151 logs.go:282] 0 containers: []
	W1217 10:49:24.000157 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:24.000163 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:24.000242 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:24.034475 2974151 cri.go:89] found id: ""
	I1217 10:49:24.034489 2974151 logs.go:282] 0 containers: []
	W1217 10:49:24.034497 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:24.034505 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:24.034514 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:24.099963 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:24.099984 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:24.136430 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:24.136447 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:24.192589 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:24.192651 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:24.209690 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:24.209707 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:24.292778 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:24.284539   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.285387   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287069   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287380   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.288843   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:24.284539   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.285387   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287069   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.287380   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:24.288843   15075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
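	Each `cri.go:54` line logs the listing filter as `{State:all Name:<component> Namespaces:[]}` before it is translated into crictl flags. A sketch of that translation (field and method names are illustrative, not minikube's actual types):

```go
package main

import (
	"fmt"
	"strings"
)

// listFilter mirrors the filter each cri.go:54 line logs before it becomes
// crictl flags. Field and method names are illustrative.
type listFilter struct {
	State      string   // "all" maps to crictl's -a flag
	Name       string   // becomes --name=<value>
	Namespaces []string // empty means any namespace
}

func (f listFilter) args() []string {
	args := []string{"ps"}
	if f.State == "all" {
		args = append(args, "-a")
	}
	return append(args, "--quiet", "--name="+f.Name)
}

func main() {
	f := listFilter{State: "all", Name: "kube-apiserver"}
	fmt.Println("crictl", strings.Join(f.args(), " "))
	// prints: crictl ps -a --quiet --name=kube-apiserver
}
```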
	I1217 10:49:26.793038 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:26.803569 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:26.803630 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:26.829202 2974151 cri.go:89] found id: ""
	I1217 10:49:26.829215 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.829222 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:26.829227 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:26.829285 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:26.855339 2974151 cri.go:89] found id: ""
	I1217 10:49:26.855353 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.855359 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:26.855365 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:26.855434 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:26.882145 2974151 cri.go:89] found id: ""
	I1217 10:49:26.882160 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.882168 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:26.882174 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:26.882231 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:26.906912 2974151 cri.go:89] found id: ""
	I1217 10:49:26.906925 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.906932 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:26.906937 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:26.906994 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:26.931691 2974151 cri.go:89] found id: ""
	I1217 10:49:26.931714 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.931722 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:26.931732 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:26.931798 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:26.957483 2974151 cri.go:89] found id: ""
	I1217 10:49:26.957497 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.957504 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:26.957510 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:26.957570 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:26.981546 2974151 cri.go:89] found id: ""
	I1217 10:49:26.981560 2974151 logs.go:282] 0 containers: []
	W1217 10:49:26.981567 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:26.981574 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:26.981584 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:27.038884 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:27.038905 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:27.059063 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:27.059079 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:27.122721 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:27.114006   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.114575   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.116274   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.117079   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.118797   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:27.114006   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.114575   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.116274   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.117079   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:27.118797   15165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:27.122731 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:27.122741 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:27.188207 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:27.188227 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
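	The `failed describe nodes` warnings wrap a `Process exited with status 1` from the versioned kubectl binary (`/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl`, pinned to the cluster version) run against the in-VM kubeconfig. A sketch of detecting that failure and capturing its stderr with plain os/exec (minikube actually runs this over SSH):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and kubeconfig copied from the log; with nothing listening
	// on 8441 this fails exactly as the warnings above show.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Status 1 plus "connection refused" on stderr is the signature of
		// an unreachable apiserver.
		fmt.Printf("process exited with status %d\n", exitErr.ExitCode())
		fmt.Print(stderr.String())
	}
}
```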
	I1217 10:49:29.720397 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:29.731016 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:29.731089 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:29.759816 2974151 cri.go:89] found id: ""
	I1217 10:49:29.759836 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.759843 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:29.759848 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:29.759909 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:29.784725 2974151 cri.go:89] found id: ""
	I1217 10:49:29.784739 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.784747 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:29.784752 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:29.784813 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:29.810710 2974151 cri.go:89] found id: ""
	I1217 10:49:29.810724 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.810731 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:29.810736 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:29.810796 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:29.835166 2974151 cri.go:89] found id: ""
	I1217 10:49:29.835180 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.835187 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:29.835196 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:29.835255 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:29.862724 2974151 cri.go:89] found id: ""
	I1217 10:49:29.862738 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.862745 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:29.862750 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:29.862814 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:29.887572 2974151 cri.go:89] found id: ""
	I1217 10:49:29.887590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.887597 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:29.887608 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:29.887676 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:29.911679 2974151 cri.go:89] found id: ""
	I1217 10:49:29.911693 2974151 logs.go:282] 0 containers: []
	W1217 10:49:29.911700 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:29.911708 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:29.911717 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:29.974573 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:29.974595 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:30.028175 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:30.028195 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:30.102876 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:30.102898 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:30.120802 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:30.120826 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:30.191763 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:30.183313   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.184024   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.185583   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.186151   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:30.187552   15287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
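The probe cycle above repeats below with only timestamps changing. For reference, a minimal sketch of the same sequence, runnable by hand from a root shell inside the minikube node; every command is copied verbatim from the log lines above, nothing here is part of the test itself:

for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
  sudo crictl ps -a --quiet --name="$c"   # empty output = no such container yet
done
sudo journalctl -u kubelet -n 400
sudo journalctl -u containerd -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
# fails with "connection refused" on localhost:8441 while kube-apiserver is down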
	I1217 10:49:32.692593 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:32.703024 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:32.703087 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:32.733277 2974151 cri.go:89] found id: ""
	I1217 10:49:32.733302 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.733310 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:32.733317 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:32.733384 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:32.763219 2974151 cri.go:89] found id: ""
	I1217 10:49:32.763234 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.763241 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:32.763246 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:32.763304 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:32.793128 2974151 cri.go:89] found id: ""
	I1217 10:49:32.793143 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.793150 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:32.793155 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:32.793213 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:32.824178 2974151 cri.go:89] found id: ""
	I1217 10:49:32.824194 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.824201 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:32.824206 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:32.824271 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:32.854145 2974151 cri.go:89] found id: ""
	I1217 10:49:32.854170 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.854178 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:32.854183 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:32.854251 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:32.879767 2974151 cri.go:89] found id: ""
	I1217 10:49:32.879797 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.879804 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:32.879809 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:32.879899 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:32.909819 2974151 cri.go:89] found id: ""
	I1217 10:49:32.909833 2974151 logs.go:282] 0 containers: []
	W1217 10:49:32.909842 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:32.909849 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:32.909859 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:32.938841 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:32.938857 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:32.995133 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:32.995156 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:33.014953 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:33.014974 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:33.085045 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:33.075667   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.076471   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078226   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.078820   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:33.080383   15392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:33.085054 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:33.085065 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:35.651037 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:35.661187 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:35.661246 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:35.687255 2974151 cri.go:89] found id: ""
	I1217 10:49:35.687270 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.687277 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:35.687282 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:35.687340 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:35.713953 2974151 cri.go:89] found id: ""
	I1217 10:49:35.713967 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.713974 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:35.713980 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:35.714040 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:35.742852 2974151 cri.go:89] found id: ""
	I1217 10:49:35.742866 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.742874 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:35.742879 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:35.742937 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:35.768219 2974151 cri.go:89] found id: ""
	I1217 10:49:35.768233 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.768240 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:35.768246 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:35.768314 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:35.792498 2974151 cri.go:89] found id: ""
	I1217 10:49:35.792512 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.792519 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:35.792524 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:35.792583 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:35.818063 2974151 cri.go:89] found id: ""
	I1217 10:49:35.818077 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.818084 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:35.818089 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:35.818147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:35.843090 2974151 cri.go:89] found id: ""
	I1217 10:49:35.843105 2974151 logs.go:282] 0 containers: []
	W1217 10:49:35.843111 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:35.843119 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:35.843129 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:35.899655 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:35.899673 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:35.916834 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:35.916850 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:35.982052 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:35.973406   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.974102   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.975751   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.976284   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:35.977956   15486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:35.982062 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:35.982075 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:36.049729 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:36.049750 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:38.582447 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:38.592471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:38.592528 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:38.617757 2974151 cri.go:89] found id: ""
	I1217 10:49:38.617772 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.617779 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:38.617786 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:38.617845 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:38.647228 2974151 cri.go:89] found id: ""
	I1217 10:49:38.647242 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.647249 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:38.647254 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:38.647312 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:38.672309 2974151 cri.go:89] found id: ""
	I1217 10:49:38.672324 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.672331 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:38.672336 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:38.672395 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:38.699575 2974151 cri.go:89] found id: ""
	I1217 10:49:38.699590 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.699597 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:38.699603 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:38.699660 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:38.729276 2974151 cri.go:89] found id: ""
	I1217 10:49:38.729290 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.729297 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:38.729303 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:38.729361 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:38.757110 2974151 cri.go:89] found id: ""
	I1217 10:49:38.757124 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.757131 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:38.757137 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:38.757197 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:38.783523 2974151 cri.go:89] found id: ""
	I1217 10:49:38.783537 2974151 logs.go:282] 0 containers: []
	W1217 10:49:38.783544 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:38.783551 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:38.783562 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:38.854691 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:38.846060   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.846723   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.848354   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.849037   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:38.850802   15585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:38.854701 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:38.854713 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:38.918821 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:38.918843 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:38.947201 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:38.947217 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:39.004566 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:39.004587 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:41.522977 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:41.536227 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:41.536288 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:41.566436 2974151 cri.go:89] found id: ""
	I1217 10:49:41.566451 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.566458 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:41.566466 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:41.566527 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:41.599863 2974151 cri.go:89] found id: ""
	I1217 10:49:41.599879 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.599886 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:41.599892 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:41.599956 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:41.631187 2974151 cri.go:89] found id: ""
	I1217 10:49:41.631202 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.631209 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:41.631216 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:41.631274 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:41.658402 2974151 cri.go:89] found id: ""
	I1217 10:49:41.658416 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.658423 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:41.658428 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:41.658487 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:41.686724 2974151 cri.go:89] found id: ""
	I1217 10:49:41.686738 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.686745 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:41.686751 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:41.686809 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:41.721194 2974151 cri.go:89] found id: ""
	I1217 10:49:41.721208 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.721215 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:41.721220 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:41.721279 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:41.750295 2974151 cri.go:89] found id: ""
	I1217 10:49:41.750309 2974151 logs.go:282] 0 containers: []
	W1217 10:49:41.750316 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:41.750323 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:41.750334 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:41.779389 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:41.779406 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:41.837692 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:41.837715 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:41.854830 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:41.854847 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:41.919451 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:41.911491   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.912035   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.913552   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.914095   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:41.915570   15708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:41.919461 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:41.919470 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:44.482271 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:44.492656 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:44.492720 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:44.530744 2974151 cri.go:89] found id: ""
	I1217 10:49:44.530758 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.530765 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:44.530770 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:44.530831 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:44.556602 2974151 cri.go:89] found id: ""
	I1217 10:49:44.556616 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.556624 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:44.556629 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:44.556687 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:44.582820 2974151 cri.go:89] found id: ""
	I1217 10:49:44.582835 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.582842 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:44.582847 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:44.582906 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:44.607152 2974151 cri.go:89] found id: ""
	I1217 10:49:44.607166 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.607173 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:44.607184 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:44.607244 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:44.634565 2974151 cri.go:89] found id: ""
	I1217 10:49:44.634579 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.634587 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:44.634592 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:44.634662 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:44.661979 2974151 cri.go:89] found id: ""
	I1217 10:49:44.661993 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.662000 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:44.662005 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:44.662066 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:44.686675 2974151 cri.go:89] found id: ""
	I1217 10:49:44.686697 2974151 logs.go:282] 0 containers: []
	W1217 10:49:44.686705 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:44.686713 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:44.686722 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:44.743011 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:44.743033 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:44.759816 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:44.759833 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:44.824819 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:44.816544   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.817205   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.818745   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.819310   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:44.820870   15803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:44.824830 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:44.824841 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:44.890788 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:44.890807 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:47.418865 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:47.429392 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:47.429467 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:47.454629 2974151 cri.go:89] found id: ""
	I1217 10:49:47.454643 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.454650 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:47.454655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:47.454766 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:47.480876 2974151 cri.go:89] found id: ""
	I1217 10:49:47.480890 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.480897 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:47.480902 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:47.480970 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:47.512027 2974151 cri.go:89] found id: ""
	I1217 10:49:47.512041 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.512054 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:47.512060 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:47.512120 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:47.539586 2974151 cri.go:89] found id: ""
	I1217 10:49:47.539600 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.539608 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:47.539613 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:47.539671 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:47.566423 2974151 cri.go:89] found id: ""
	I1217 10:49:47.566437 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.566444 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:47.566450 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:47.566507 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:47.592329 2974151 cri.go:89] found id: ""
	I1217 10:49:47.592343 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.592350 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:47.592355 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:47.592442 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:47.617999 2974151 cri.go:89] found id: ""
	I1217 10:49:47.618013 2974151 logs.go:282] 0 containers: []
	W1217 10:49:47.618020 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:47.618028 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:47.618037 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:47.678218 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:47.678240 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:47.695642 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:47.695659 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:47.762123 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:47.753095   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.754063   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.755748   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.756187   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:47.757812   15909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:47.762133 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:47.762146 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:47.828387 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:47.828408 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:50.363629 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:50.373970 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:50.374026 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:50.398664 2974151 cri.go:89] found id: ""
	I1217 10:49:50.398678 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.398685 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:50.398690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:50.398749 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:50.424119 2974151 cri.go:89] found id: ""
	I1217 10:49:50.424132 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.424139 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:50.424144 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:50.424203 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:50.450501 2974151 cri.go:89] found id: ""
	I1217 10:49:50.450516 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.450523 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:50.450529 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:50.450591 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:50.479279 2974151 cri.go:89] found id: ""
	I1217 10:49:50.479330 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.479338 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:50.479344 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:50.479402 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:50.514044 2974151 cri.go:89] found id: ""
	I1217 10:49:50.514058 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.514065 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:50.514070 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:50.514147 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:50.550857 2974151 cri.go:89] found id: ""
	I1217 10:49:50.550871 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.550878 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:50.550883 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:50.550943 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:50.586702 2974151 cri.go:89] found id: ""
	I1217 10:49:50.586716 2974151 logs.go:282] 0 containers: []
	W1217 10:49:50.586724 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:50.586731 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:50.586740 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:50.649317 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:50.649338 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:50.681689 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:50.681706 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:50.739069 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:50.739092 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:50.756760 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:50.756777 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:50.826240 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:50.816693   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.817339   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819115   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.819743   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:50.821406   16025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 10:49:53.327009 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:53.338042 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:53.338105 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:53.364395 2974151 cri.go:89] found id: ""
	I1217 10:49:53.364409 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.364437 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:53.364443 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:53.364504 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:53.391405 2974151 cri.go:89] found id: ""
	I1217 10:49:53.391418 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.391425 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:53.391435 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:53.391495 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:53.415894 2974151 cri.go:89] found id: ""
	I1217 10:49:53.415909 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.415916 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:53.415921 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:53.415987 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:53.441489 2974151 cri.go:89] found id: ""
	I1217 10:49:53.441505 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.441512 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:53.441518 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:53.441577 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:53.470465 2974151 cri.go:89] found id: ""
	I1217 10:49:53.470480 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.470487 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:53.470492 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:53.470580 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:53.496777 2974151 cri.go:89] found id: ""
	I1217 10:49:53.496791 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.496798 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:53.496804 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:53.496862 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:53.522462 2974151 cri.go:89] found id: ""
	I1217 10:49:53.522477 2974151 logs.go:282] 0 containers: []
	W1217 10:49:53.522484 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:53.522492 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:53.522503 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:49:53.587962 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:53.587981 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:53.605021 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:53.605038 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:53.674653 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:53.666595   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.667148   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.668629   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.669055   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.670469   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:53.666595   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.667148   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.668629   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.669055   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:53.670469   16118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
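The repeated "connection refused" on [::1]:8441 in the block above means nothing is listening on the apiserver port at all, which is consistent with every crictl query returning zero containers: the control plane never came up, so the describe-nodes step can only fail. A minimal sketch for reproducing the same probes by hand; <profile> is a placeholder for the profile under test, and the final health check is our addition, not something the harness runs:

	minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=kube-apiserver
	# Only meaningful once a process appears; a healthy apiserver answers "ok".
	minikube ssh -p <profile> -- curl -sk https://localhost:8441/healthz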
	I1217 10:49:53.674671 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:53.674682 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:53.736888 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:53.736908 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
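The "container status" command just above is a shell fallback chain: "which crictl || echo crictl" resolves crictl's full path when it is installed and otherwise degrades to the bare name, and the trailing "|| sudo docker ps -a" covers nodes running the docker runtime instead of containerd. A standalone equivalent, as a sketch:

	# Prefer crictl when present, fall back to docker; same effect as the log's one-liner.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a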
	I1217 10:49:56.264574 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:56.274948 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:56.275019 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:56.306086 2974151 cri.go:89] found id: ""
	I1217 10:49:56.306108 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.306116 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:56.306122 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:56.306189 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:56.331503 2974151 cri.go:89] found id: ""
	I1217 10:49:56.331517 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.331524 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:56.331529 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:56.331588 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:56.357713 2974151 cri.go:89] found id: ""
	I1217 10:49:56.357727 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.357734 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:56.357740 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:56.357804 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:56.386307 2974151 cri.go:89] found id: ""
	I1217 10:49:56.386322 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.386329 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:56.386335 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:56.386392 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:56.411103 2974151 cri.go:89] found id: ""
	I1217 10:49:56.411116 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.411148 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:56.411154 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:56.411210 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:56.438603 2974151 cri.go:89] found id: ""
	I1217 10:49:56.438617 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.438632 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:56.438638 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:56.438700 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:56.463485 2974151 cri.go:89] found id: ""
	I1217 10:49:56.463499 2974151 logs.go:282] 0 containers: []
	W1217 10:49:56.463506 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:56.463513 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:56.463526 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:56.480151 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:56.480170 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:56.564122 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:56.555873   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.556612   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558127   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558422   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.559904   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:56.555873   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.556612   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558127   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.558422   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:56.559904   16213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:56.564133 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:56.564152 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:56.631606 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:56.631625 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:56.658603 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:56.658621 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
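With no containers to inspect, each gather pass falls back to three host-level sources: the last 400 journal lines for kubelet and containerd, plus kernel messages filtered to warn and above (dmesg's -H gives human-readable timestamps, -P disables the pager, and -L=never strips color codes). The same pulls, verbatim from the log and runnable in a node shell:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400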
	I1217 10:49:59.216557 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:49:59.226542 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:49:59.226605 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:49:59.250485 2974151 cri.go:89] found id: ""
	I1217 10:49:59.250501 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.250522 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:49:59.250528 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:49:59.250597 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:49:59.275922 2974151 cri.go:89] found id: ""
	I1217 10:49:59.275936 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.275945 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:49:59.275960 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:49:59.276021 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:49:59.305346 2974151 cri.go:89] found id: ""
	I1217 10:49:59.305372 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.305380 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:49:59.305386 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:49:59.305454 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:49:59.329784 2974151 cri.go:89] found id: ""
	I1217 10:49:59.329799 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.329806 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:49:59.329812 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:49:59.329870 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:49:59.353939 2974151 cri.go:89] found id: ""
	I1217 10:49:59.353953 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.353961 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:49:59.353968 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:49:59.354030 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:49:59.379444 2974151 cri.go:89] found id: ""
	I1217 10:49:59.379458 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.379465 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:49:59.379471 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:49:59.379535 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:49:59.404346 2974151 cri.go:89] found id: ""
	I1217 10:49:59.404360 2974151 logs.go:282] 0 containers: []
	W1217 10:49:59.404367 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:49:59.404374 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:49:59.404385 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:49:59.421191 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:49:59.421209 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:49:59.484153 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:49:59.476366   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.477052   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478594   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478902   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.480341   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:49:59.476366   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.477052   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478594   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.478902   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:49:59.480341   16317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:49:59.484164 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:49:59.484177 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:49:59.553474 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:49:59.553493 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:49:59.587183 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:49:59.587199 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
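Each cycle opens with the readiness probe "pgrep -xnf kube-apiserver.*minikube.*": -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest match. A non-zero exit here is what sends the harness back into another log-gathering pass. A hedged one-liner to check the same condition interactively:

	# Prints the newest matching PID and "up" only if an apiserver process exists.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo up || echo down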
	I1217 10:50:02.144181 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:02.155199 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:02.155292 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:02.188757 2974151 cri.go:89] found id: ""
	I1217 10:50:02.188773 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.188780 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:02.188785 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:02.188851 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:02.219315 2974151 cri.go:89] found id: ""
	I1217 10:50:02.219330 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.219337 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:02.219342 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:02.219406 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:02.244595 2974151 cri.go:89] found id: ""
	I1217 10:50:02.244609 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.244616 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:02.244622 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:02.244684 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:02.270632 2974151 cri.go:89] found id: ""
	I1217 10:50:02.270647 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.270654 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:02.270659 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:02.270718 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:02.296393 2974151 cri.go:89] found id: ""
	I1217 10:50:02.296407 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.296447 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:02.296454 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:02.296521 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:02.326837 2974151 cri.go:89] found id: ""
	I1217 10:50:02.326851 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.326859 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:02.326868 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:02.326931 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:02.356502 2974151 cri.go:89] found id: ""
	I1217 10:50:02.356517 2974151 logs.go:282] 0 containers: []
	W1217 10:50:02.356527 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:02.356536 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:02.356548 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:02.434224 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:02.417603   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.418251   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.426024   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428283   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428822   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:02.417603   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.418251   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.426024   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428283   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:02.428822   16417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:50:02.434234 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:02.434244 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:02.502034 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:02.502055 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:02.541286 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:02.541303 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:02.606116 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:02.606137 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:05.125496 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:05.136157 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:05.136217 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:05.160937 2974151 cri.go:89] found id: ""
	I1217 10:50:05.160952 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.160959 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:05.160964 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:05.161024 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:05.185873 2974151 cri.go:89] found id: ""
	I1217 10:50:05.185887 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.185894 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:05.185900 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:05.185999 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:05.212646 2974151 cri.go:89] found id: ""
	I1217 10:50:05.212676 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.212684 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:05.212690 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:05.212767 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:05.238323 2974151 cri.go:89] found id: ""
	I1217 10:50:05.238340 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.238347 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:05.238353 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:05.238414 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:05.263764 2974151 cri.go:89] found id: ""
	I1217 10:50:05.263779 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.263786 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:05.263792 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:05.263849 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:05.289054 2974151 cri.go:89] found id: ""
	I1217 10:50:05.289069 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.289076 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:05.289081 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:05.289144 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:05.314515 2974151 cri.go:89] found id: ""
	I1217 10:50:05.314530 2974151 logs.go:282] 0 containers: []
	W1217 10:50:05.314538 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:05.314546 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:05.314556 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:05.380980 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:05.381002 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:05.414207 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:05.414222 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:05.472281 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:05.472301 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:05.489358 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:05.489375 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:05.571554 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:05.562906   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.563808   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.565527   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.566129   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.567151   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:05.562906   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.563808   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.565527   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.566129   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:05.567151   16536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
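The cycle timestamps (10:49:53, 10:49:56, 10:49:59, 10:50:02, and so on) show the probe repeating on a roughly three-second cadence until the surrounding timeout expires. An illustrative wait loop with the same shape, not minikube's actual implementation:

	# Sketch only: poll until an apiserver process appears.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done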
	I1217 10:50:08.071830 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:08.082387 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:08.082462 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:08.110539 2974151 cri.go:89] found id: ""
	I1217 10:50:08.110553 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.110561 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:08.110566 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:08.110629 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:08.135732 2974151 cri.go:89] found id: ""
	I1217 10:50:08.135746 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.135754 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:08.135760 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:08.135828 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:08.162274 2974151 cri.go:89] found id: ""
	I1217 10:50:08.162289 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.162296 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:08.162302 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:08.162359 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:08.187522 2974151 cri.go:89] found id: ""
	I1217 10:50:08.187536 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.187543 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:08.187549 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:08.187618 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:08.212868 2974151 cri.go:89] found id: ""
	I1217 10:50:08.212883 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.212890 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:08.212896 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:08.212958 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:08.236894 2974151 cri.go:89] found id: ""
	I1217 10:50:08.236908 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.236915 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:08.236921 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:08.236981 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:08.262293 2974151 cri.go:89] found id: ""
	I1217 10:50:08.262308 2974151 logs.go:282] 0 containers: []
	W1217 10:50:08.262315 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:08.262322 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:08.262332 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:08.320099 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:08.320118 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:08.337595 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:08.337611 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:08.404535 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:08.395902   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.396655   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398294   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398971   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.400705   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:08.395902   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.396655   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398294   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.398971   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:08.400705   16629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:50:08.404545 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:08.404557 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:08.467318 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:08.467338 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:11.014160 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:11.025076 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:11.025146 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:11.050236 2974151 cri.go:89] found id: ""
	I1217 10:50:11.050252 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.050260 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:11.050265 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:11.050329 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:11.081289 2974151 cri.go:89] found id: ""
	I1217 10:50:11.081311 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.081318 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:11.081324 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:11.081385 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:11.111117 2974151 cri.go:89] found id: ""
	I1217 10:50:11.111134 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.111141 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:11.111146 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:11.111209 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:11.137886 2974151 cri.go:89] found id: ""
	I1217 10:50:11.137900 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.137908 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:11.137913 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:11.137972 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:11.164080 2974151 cri.go:89] found id: ""
	I1217 10:50:11.164096 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.164104 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:11.164119 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:11.164183 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:11.194241 2974151 cri.go:89] found id: ""
	I1217 10:50:11.194256 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.194264 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:11.194269 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:11.194331 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:11.220644 2974151 cri.go:89] found id: ""
	I1217 10:50:11.220659 2974151 logs.go:282] 0 containers: []
	W1217 10:50:11.220666 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:11.220673 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:11.220687 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:11.283052 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:11.283070 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:11.310700 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:11.310717 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:11.366749 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:11.366769 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:11.383957 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:11.383975 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:11.451001 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:11.442629   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.443048   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.444733   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.445416   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.447157   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:11.442629   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.443048   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.444733   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.445416   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:11.447157   16749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
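Note that describe-nodes is executed with the version-pinned kubectl shipped inside the node and the node-local kubeconfig, so the refusal is measured from inside the node itself and rules out host-side proxy or tunnel problems. The exact command, verbatim from the log:

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig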
	I1217 10:50:13.952741 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:13.962784 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:13.962846 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:13.987247 2974151 cri.go:89] found id: ""
	I1217 10:50:13.987262 2974151 logs.go:282] 0 containers: []
	W1217 10:50:13.987269 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:13.987274 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:13.987340 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:14.012962 2974151 cri.go:89] found id: ""
	I1217 10:50:14.012977 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.012984 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:14.012990 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:14.013058 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:14.038181 2974151 cri.go:89] found id: ""
	I1217 10:50:14.038195 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.038203 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:14.038208 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:14.038266 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:14.062700 2974151 cri.go:89] found id: ""
	I1217 10:50:14.062715 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.062723 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:14.062728 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:14.062785 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:14.093364 2974151 cri.go:89] found id: ""
	I1217 10:50:14.093386 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.093393 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:14.093399 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:14.093457 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:14.118504 2974151 cri.go:89] found id: ""
	I1217 10:50:14.118519 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.118525 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:14.118531 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:14.118596 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:14.143182 2974151 cri.go:89] found id: ""
	I1217 10:50:14.143198 2974151 logs.go:282] 0 containers: []
	W1217 10:50:14.143204 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:14.143212 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:14.143223 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:14.201003 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:14.201024 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:14.218136 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:14.218153 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:14.291347 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:14.280379   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.281686   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285094   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285633   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.287421   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:14.280379   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.281686   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285094   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.285633   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:14.287421   16840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:50:14.291358 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:14.291370 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:14.354518 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:14.354541 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
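The recurring "listing CRI containers in root /run/containerd/runc/k8s.io" line refers to containerd's k8s.io namespace, where all CRI-managed containers live; an empty listing there means kubelet never asked containerd to create a single sandbox. A hedged cross-check against containerd directly, assuming ctr is available on the node (it usually ships alongside containerd):

	# The CRI view and the raw containerd view should agree.
	sudo crictl ps -a
	sudo ctr --namespace k8s.io containers list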
	I1217 10:50:16.888907 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:16.899327 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:16.899396 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:16.924553 2974151 cri.go:89] found id: ""
	I1217 10:50:16.924572 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.924580 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:16.924586 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:16.924646 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:16.950729 2974151 cri.go:89] found id: ""
	I1217 10:50:16.950743 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.950750 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:16.950756 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:16.950811 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:16.978167 2974151 cri.go:89] found id: ""
	I1217 10:50:16.978181 2974151 logs.go:282] 0 containers: []
	W1217 10:50:16.978189 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:16.978193 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:16.978254 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:17.005223 2974151 cri.go:89] found id: ""
	I1217 10:50:17.005239 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.005247 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:17.005253 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:17.005336 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:17.031301 2974151 cri.go:89] found id: ""
	I1217 10:50:17.031315 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.031323 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:17.031328 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:17.031393 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:17.058782 2974151 cri.go:89] found id: ""
	I1217 10:50:17.058796 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.058804 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:17.058810 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:17.058869 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:17.084580 2974151 cri.go:89] found id: ""
	I1217 10:50:17.084595 2974151 logs.go:282] 0 containers: []
	W1217 10:50:17.084603 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:17.084611 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:17.084628 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:17.144045 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:17.144067 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:17.161459 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:17.161476 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:17.230344 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:17.221052   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.221467   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.224663   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.225044   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.226301   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:17.221052   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.221467   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.224663   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.225044   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:17.226301   16943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
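
	Every kubectl probe in the block above is refused on localhost:8441, which matches the empty crictl listings earlier in the loop: no kube-apiserver container ever came up. A minimal way to confirm that from inside the node (a sketch; it assumes ss is available in the node image):

	    sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"
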
	I1217 10:50:17.230353 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:17.230364 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:17.292978 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:17.292998 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:19.828581 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:19.838853 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:50:19.838914 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:50:19.864198 2974151 cri.go:89] found id: ""
	I1217 10:50:19.864213 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.864220 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:50:19.864225 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:50:19.864284 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:50:19.899721 2974151 cri.go:89] found id: ""
	I1217 10:50:19.899735 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.899758 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:50:19.899764 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:50:19.899837 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:50:19.928330 2974151 cri.go:89] found id: ""
	I1217 10:50:19.928345 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.928352 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:50:19.928356 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:50:19.928445 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:50:19.954497 2974151 cri.go:89] found id: ""
	I1217 10:50:19.954514 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.954538 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:50:19.954545 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:50:19.954608 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:50:19.980091 2974151 cri.go:89] found id: ""
	I1217 10:50:19.980105 2974151 logs.go:282] 0 containers: []
	W1217 10:50:19.980112 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:50:19.980118 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:50:19.980184 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:50:20.010659 2974151 cri.go:89] found id: ""
	I1217 10:50:20.010676 2974151 logs.go:282] 0 containers: []
	W1217 10:50:20.010685 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:50:20.010691 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:50:20.010767 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:50:20.043088 2974151 cri.go:89] found id: ""
	I1217 10:50:20.043104 2974151 logs.go:282] 0 containers: []
	W1217 10:50:20.043113 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:50:20.043121 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:50:20.043132 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:50:20.100529 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:50:20.100550 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:50:20.118575 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:50:20.118591 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:50:20.187144 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:50:20.178717   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.179517   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181042   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181412   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.182990   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:50:20.178717   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.179517   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181042   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.181412   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:50:20.182990   17053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:50:20.187155 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:50:20.187167 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:50:20.249393 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:50:20.249414 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 10:50:22.778795 2974151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 10:50:22.790536 2974151 kubeadm.go:602] duration metric: took 4m2.042602584s to restartPrimaryControlPlane
	W1217 10:50:22.790601 2974151 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 10:50:22.790675 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 10:50:23.205315 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 10:50:23.219008 2974151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 10:50:23.227117 2974151 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:50:23.227176 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:50:23.235370 2974151 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:50:23.235380 2974151 kubeadm.go:158] found existing configuration files:
	
	I1217 10:50:23.235436 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:50:23.243539 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:50:23.243597 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:50:23.251153 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:50:23.259288 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:50:23.259364 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:50:23.267370 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:50:23.275727 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:50:23.275787 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:50:23.283930 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:50:23.292280 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:50:23.292340 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
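
	The grep-then-rm sequence above is minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint gets removed. Here each grep exits with status 2 because `kubeadm reset` already deleted the files, so every rm is a no-op. A shell paraphrase of the logged behavior (a sketch, not minikube's actual Go code):

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
	        || sudo rm -f "/etc/kubernetes/$f.conf"   # drop configs that do not point at this endpoint
	    done
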
	I1217 10:50:23.300010 2974151 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:50:23.340550 2974151 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:50:23.340717 2974151 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:50:23.412202 2974151 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:50:23.412287 2974151 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:50:23.412322 2974151 kubeadm.go:319] OS: Linux
	I1217 10:50:23.412377 2974151 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:50:23.412441 2974151 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:50:23.412489 2974151 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:50:23.412536 2974151 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:50:23.412585 2974151 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:50:23.412632 2974151 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:50:23.412677 2974151 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:50:23.412724 2974151 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:50:23.412769 2974151 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:50:23.486890 2974151 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:50:23.486989 2974151 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:50:23.487074 2974151 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:50:23.492949 2974151 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:50:23.496478 2974151 out.go:252]   - Generating certificates and keys ...
	I1217 10:50:23.496568 2974151 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:50:23.496637 2974151 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:50:23.496718 2974151 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 10:50:23.496782 2974151 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 10:50:23.496856 2974151 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 10:50:23.496912 2974151 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 10:50:23.496979 2974151 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 10:50:23.497043 2974151 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 10:50:23.497122 2974151 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 10:50:23.497199 2974151 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 10:50:23.497239 2974151 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 10:50:23.497303 2974151 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:50:23.659882 2974151 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:50:23.806390 2974151 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:50:23.994170 2974151 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:50:24.254389 2974151 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:50:24.616203 2974151 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:50:24.616885 2974151 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:50:24.619452 2974151 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:50:24.622875 2974151 out.go:252]   - Booting up control plane ...
	I1217 10:50:24.622979 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:50:24.623060 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:50:24.623134 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:50:24.643299 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:50:24.643404 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:50:24.652837 2974151 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:50:24.652937 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:50:24.652975 2974151 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:50:24.787245 2974151 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:50:24.787354 2974151 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:54:24.787078 2974151 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000331472s
	I1217 10:54:24.787103 2974151 kubeadm.go:319] 
	I1217 10:54:24.787156 2974151 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:54:24.787187 2974151 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:54:24.787285 2974151 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:54:24.787290 2974151 kubeadm.go:319] 
	I1217 10:54:24.787387 2974151 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:54:24.787416 2974151 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:54:24.787445 2974151 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:54:24.787448 2974151 kubeadm.go:319] 
	I1217 10:54:24.791515 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:54:24.791934 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:54:24.792041 2974151 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:54:24.792274 2974151 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 10:54:24.792279 2974151 kubeadm.go:319] 
	I1217 10:54:24.792347 2974151 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 10:54:24.792486 2974151 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000331472s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
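
	The first `kubeadm init` attempt has now spent its full 4m0s waiting on the kubelet health endpoint. Before minikube retries, the useful next steps are exactly the ones the output above names, plus the probe kubeadm was polling (run inside the node, e.g. via `minikube ssh`):

	    systemctl status kubelet                    # is the unit active at all?
	    journalctl -xeu kubelet -n 100              # recent kubelet errors around the failed start
	    curl -sSL http://127.0.0.1:10248/healthz    # the endpoint kubeadm polls for up to 4m0s
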
	
	I1217 10:54:24.792573 2974151 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 10:54:25.209097 2974151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 10:54:25.222902 2974151 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 10:54:25.222960 2974151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 10:54:25.231173 2974151 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 10:54:25.231182 2974151 kubeadm.go:158] found existing configuration files:
	
	I1217 10:54:25.231234 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 10:54:25.239239 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 10:54:25.239293 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 10:54:25.246851 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 10:54:25.254681 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 10:54:25.254734 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 10:54:25.262252 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 10:54:25.270359 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 10:54:25.270417 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 10:54:25.277936 2974151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 10:54:25.286063 2974151 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 10:54:25.286121 2974151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 10:54:25.293834 2974151 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 10:54:25.333226 2974151 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 10:54:25.333620 2974151 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 10:54:25.403386 2974151 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 10:54:25.403450 2974151 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 10:54:25.403488 2974151 kubeadm.go:319] OS: Linux
	I1217 10:54:25.403533 2974151 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 10:54:25.403579 2974151 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 10:54:25.403625 2974151 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 10:54:25.403672 2974151 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 10:54:25.403719 2974151 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 10:54:25.403765 2974151 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 10:54:25.403809 2974151 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 10:54:25.403855 2974151 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 10:54:25.403900 2974151 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 10:54:25.478252 2974151 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 10:54:25.478355 2974151 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 10:54:25.478445 2974151 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 10:54:25.483628 2974151 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 10:54:25.487136 2974151 out.go:252]   - Generating certificates and keys ...
	I1217 10:54:25.487234 2974151 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 10:54:25.487310 2974151 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 10:54:25.487433 2974151 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 10:54:25.487529 2974151 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 10:54:25.487605 2974151 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 10:54:25.487662 2974151 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 10:54:25.487729 2974151 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 10:54:25.487795 2974151 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 10:54:25.487917 2974151 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 10:54:25.487994 2974151 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 10:54:25.488380 2974151 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 10:54:25.488481 2974151 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 10:54:26.117291 2974151 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 10:54:26.756756 2974151 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 10:54:27.066378 2974151 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 10:54:27.235545 2974151 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 10:54:27.468773 2974151 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 10:54:27.469453 2974151 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 10:54:27.472021 2974151 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 10:54:27.475042 2974151 out.go:252]   - Booting up control plane ...
	I1217 10:54:27.475141 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 10:54:27.475225 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 10:54:27.475306 2974151 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 10:54:27.497360 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 10:54:27.497461 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 10:54:27.505167 2974151 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 10:54:27.506337 2974151 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 10:54:27.506384 2974151 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 10:54:27.645391 2974151 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 10:54:27.645508 2974151 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 10:58:27.644872 2974151 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000353032s
	I1217 10:58:27.644897 2974151 kubeadm.go:319] 
	I1217 10:58:27.644952 2974151 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 10:58:27.644984 2974151 kubeadm.go:319] 	- The kubelet is not running
	I1217 10:58:27.645087 2974151 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 10:58:27.645092 2974151 kubeadm.go:319] 
	I1217 10:58:27.645195 2974151 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 10:58:27.645226 2974151 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 10:58:27.645255 2974151 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 10:58:27.645258 2974151 kubeadm.go:319] 
	I1217 10:58:27.649050 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 10:58:27.649524 2974151 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 10:58:27.649634 2974151 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 10:58:27.649875 2974151 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 10:58:27.649881 2974151 kubeadm.go:319] 
	I1217 10:58:27.649949 2974151 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 10:58:27.650003 2974151 kubeadm.go:403] duration metric: took 12m6.936466746s to StartCluster
	I1217 10:58:27.650034 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 10:58:27.650094 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 10:58:27.678841 2974151 cri.go:89] found id: ""
	I1217 10:58:27.678855 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.678862 2974151 logs.go:284] No container was found matching "kube-apiserver"
	I1217 10:58:27.678868 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 10:58:27.678928 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 10:58:27.704494 2974151 cri.go:89] found id: ""
	I1217 10:58:27.704507 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.704514 2974151 logs.go:284] No container was found matching "etcd"
	I1217 10:58:27.704520 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 10:58:27.704578 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 10:58:27.729757 2974151 cri.go:89] found id: ""
	I1217 10:58:27.729770 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.729777 2974151 logs.go:284] No container was found matching "coredns"
	I1217 10:58:27.729783 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 10:58:27.729840 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 10:58:27.757253 2974151 cri.go:89] found id: ""
	I1217 10:58:27.757267 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.757274 2974151 logs.go:284] No container was found matching "kube-scheduler"
	I1217 10:58:27.757284 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 10:58:27.757343 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 10:58:27.781735 2974151 cri.go:89] found id: ""
	I1217 10:58:27.781749 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.781756 2974151 logs.go:284] No container was found matching "kube-proxy"
	I1217 10:58:27.781760 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 10:58:27.781817 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 10:58:27.806628 2974151 cri.go:89] found id: ""
	I1217 10:58:27.806642 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.806649 2974151 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 10:58:27.806655 2974151 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 10:58:27.806713 2974151 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 10:58:27.831983 2974151 cri.go:89] found id: ""
	I1217 10:58:27.831997 2974151 logs.go:282] 0 containers: []
	W1217 10:58:27.832004 2974151 logs.go:284] No container was found matching "kindnet"
	I1217 10:58:27.832013 2974151 logs.go:123] Gathering logs for kubelet ...
	I1217 10:58:27.832023 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 10:58:27.889768 2974151 logs.go:123] Gathering logs for dmesg ...
	I1217 10:58:27.889788 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 10:58:27.906789 2974151 logs.go:123] Gathering logs for describe nodes ...
	I1217 10:58:27.906806 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 10:58:27.971294 2974151 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 10:58:27.963241   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.963807   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965347   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965829   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.967335   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 10:58:27.963241   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.963807   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965347   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.965829   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 10:58:27.967335   20868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 10:58:27.971304 2974151 logs.go:123] Gathering logs for containerd ...
	I1217 10:58:27.971317 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 10:58:28.034286 2974151 logs.go:123] Gathering logs for container status ...
	I1217 10:58:28.034308 2974151 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 10:58:28.076352 2974151 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 10:58:28.076384 2974151 out.go:285] * 
	W1217 10:58:28.076460 2974151 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:58:28.076478 2974151 out.go:285] * 
	W1217 10:58:28.078620 2974151 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 10:58:28.084354 2974151 out.go:203] 
	W1217 10:58:28.086597 2974151 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000353032s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 10:58:28.086645 2974151 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 10:58:28.086668 2974151 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 10:58:28.089656 2974151 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.366987997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367000042Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367054433Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367069325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367089255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367101152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367110883Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367125414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367141668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367171452Z" level=info msg="Connect containerd service"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367467445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.368062180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389242103Z" level=info msg="Start subscribing containerd event"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389467722Z" level=info msg="Start recovering state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389473490Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.390097098Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430326171Z" level=info msg="Start event monitor"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430520850Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430594670Z" level=info msg="Start streaming server"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430655559Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430712788Z" level=info msg="runtime interface starting up..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430945234Z" level=info msg="starting plugins..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430989147Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.431326009Z" level=info msg="containerd successfully booted in 0.084806s"
	Dec 17 10:46:19 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:00:35.536921   22462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:35.537509   22462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:35.539091   22462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:35.539601   22462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:35.541102   22462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:00:35 up 16:43,  0 user,  load average: 0.08, 0.18, 0.41
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:00:32 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:32 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 17 11:00:32 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:32 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:32 functional-232588 kubelet[22346]: E1217 11:00:32.797641   22346 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:32 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:32 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:33 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 17 11:00:33 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:33 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:33 functional-232588 kubelet[22352]: E1217 11:00:33.552030   22352 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:33 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:33 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:34 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 17 11:00:34 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:34 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:34 functional-232588 kubelet[22357]: E1217 11:00:34.319689   22357 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:34 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:34 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:34 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 489.
	Dec 17 11:00:34 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:35 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:35 functional-232588 kubelet[22378]: E1217 11:00:35.056718   22378 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:35 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:35 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
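The kubelet crash loop captured above has a single root cause: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host unless the kubelet configuration option FailCgroupV1 is explicitly set to false, exactly as the kubeadm SystemVerification warning states (and as minikube's own suggestion about --extra-config hints). For reference only (this is not part of minikube or the test suite), the host's cgroup version can be probed the same way runc's libcontainer does, by checking /sys/fs/cgroup for the cgroup2 filesystem magic. A minimal Go sketch, assuming golang.org/x/sys/unix:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// cgroupV2 reports whether /sys/fs/cgroup is mounted as cgroup2 (the
// unified hierarchy) by comparing the filesystem type against
// CGROUP2_SUPER_MAGIC -- the same probe runc's libcontainer performs.
func cgroupV2() (bool, error) {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		return false, err
	}
	return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
}

func main() {
	v2, err := cgroupV2()
	switch {
	case err != nil:
		fmt.Println("statfs /sys/fs/cgroup:", err)
	case v2:
		fmt.Println("cgroup v2 (unified): kubelet v1.35+ starts normally")
	default:
		fmt.Println("cgroup v1: kubelet v1.35+ exits unless failCgroupV1=false is set")
	}
}

On this worker the probe would report cgroup v1, matching both the kubeadm warning and the repeated "kubelet is configured to not run on a host using cgroup v1" exits in the journal.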
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (382.347034ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.41s)
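The status probe in the helper above selects one field with a Go text/template, --format={{.APIServer}}; with the control plane down it renders "Stopped", which is exactly the stdout captured. A minimal sketch of that mechanism (the Status struct here is illustrative, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status stands in for the object minikube renders with --format; the
// field names are illustrative, chosen to mirror the template in the log.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// {{.APIServer}} picks a single field, so the command prints just "Stopped".
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}); err != nil {
		panic(err)
	}
}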

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the previous warning repeated 9 more times, verbatim]
I1217 10:58:46.265969 2924574 retry.go:31] will retry after 3.933192663s: Temporary Error: Get "http://10.102.11.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the previous warning repeated 13 more times, verbatim]
I1217 10:59:00.200652 2924574 retry.go:31] will retry after 5.007223839s: Temporary Error: Get "http://10.102.11.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the previous warning repeated 14 more times, verbatim]
I1217 10:59:15.209117 2924574 retry.go:31] will retry after 4.34775775s: Temporary Error: Get "http://10.102.11.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the previous warning repeated 12 more times, verbatim]
E1217 10:59:28.205436 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 10:59:29.557416 2924574 retry.go:31] will retry after 9.516944719s: Temporary Error: Get "http://10.102.11.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the previous warning repeated 19 more times, verbatim]
I1217 10:59:49.074948 2924574 retry.go:31] will retry after 10.290231677s: Temporary Error: Get "http://10.102.11.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the previous warning repeated 19 more times, verbatim]
I1217 11:00:09.366175 2924574 retry.go:31] will retry after 14.456750837s: Temporary Error: Get "http://10.102.11.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[… the preceding warning line repeats 140 more times while the apiserver stays unreachable …]
E1217 11:02:31.278369 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[… the preceding warning line repeats 4 more times …]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (318.928225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
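All of the warnings above come from a single poll loop: the helper lists pods in kube-system by label selector on an interval until one appears or the 4m0s deadline expires; with the apiserver down, every List call fails at TCP connect ("connection refused"), and once the deadline passes the client-side rate limiter surfaces the context error instead. A minimal sketch of that pattern with client-go (illustrative only; the 2s interval and overall shape are assumptions, not minikube's actual helper):

	// Minimal sketch, assuming a kubeconfig at the default path.
	// Not the minikube test helper itself — just the poll-by-label-selector
	// pattern that produces "pod list ... returned: ... connection refused".
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same 4m0s budget as the failing test.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if err != nil {
				// With the apiserver stopped, this is where "connect: connection
				// refused" appears; after the deadline, the client rate limiter's
				// Wait returns the context error instead.
				fmt.Println("WARNING: pod list returned:", err)
			} else if len(pods.Items) > 0 {
				fmt.Println("found", len(pods.Items), "pod(s)")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed to start within 4m0s:", ctx.Err())
				return
			case <-time.After(2 * time.Second): // assumed poll interval
			}
		}
	}

Each failed List is logged and retried, which is why the same warning recurs for the full four minutes before the rate-limiter error closes the loop.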
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
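The post-mortem capture itself is simple: snapshot the host proxy environment, then shell out to docker inspect for the profile container. A self-contained sketch of that capture (hypothetical helper, not the test's own code; the profile name is taken from this run):

	// Minimal sketch of the post-mortem "network settings" capture:
	// print proxy env vars as "<empty>" when unset, then dump
	// `docker inspect` output for the profile container.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		for _, key := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			val := os.Getenv(key)
			if val == "" {
				val = "<empty>"
			}
			fmt.Printf("%s=%q\n", key, val)
		}

		// docker inspect prints the container's full JSON state, as shown below.
		out, err := exec.Command("docker", "inspect", "functional-232588").CombinedOutput()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
		}
		os.Stdout.Write(out)
	}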
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
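
The inspect dump above is the full container record; when a post-mortem only needs a few fields (the container state, its address on the minikube network, the host port mapped to the apiserver), docker's built-in Go-template filter is easier to scan. A minimal sketch against the container named in this report — the field paths are standard docker inspect keys, not minikube-specific:

	# Container state plus its address on the "functional-232588" network
	docker inspect -f '{{.State.Status}} {{.NetworkSettings.Networks.functional-232588.IPAddress}}' functional-232588
	# Host port Docker assigned to the apiserver's 8441/tcp binding
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-232588

Against the record above these would print "running 192.168.49.2" and "35736".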
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (325.578345ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
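
The Host field polled here comes from minikube status's Go-template mode; when more than one field is needed, the same status object can be emitted as JSON in a single call (a sketch of the CLI, not something the harness runs):

	# Emit the whole status object instead of one templated field
	minikube status -p functional-232588 --output json

For this profile it would show the host Running with the apiserver Stopped, matching the two templated checks in this post-mortem.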
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-232588 image load --daemon kicbase/echo-server:functional-232588 --alsologtostderr                                                                   │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ image          │ functional-232588 image ls                                                                                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ image          │ functional-232588 image save kicbase/echo-server:functional-232588 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ image          │ functional-232588 image rm kicbase/echo-server:functional-232588 --alsologtostderr                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ image          │ functional-232588 image ls                                                                                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ image          │ functional-232588 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:01 UTC │
	│ image          │ functional-232588 image ls                                                                                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ image          │ functional-232588 image save --daemon kicbase/echo-server:functional-232588 --alsologtostderr                                                                   │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ ssh            │ functional-232588 ssh sudo cat /etc/test/nested/copy/2924574/hosts                                                                                              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ ssh            │ functional-232588 ssh sudo cat /etc/ssl/certs/2924574.pem                                                                                                       │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ ssh            │ functional-232588 ssh sudo cat /usr/share/ca-certificates/2924574.pem                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ ssh            │ functional-232588 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ ssh            │ functional-232588 ssh sudo cat /etc/ssl/certs/29245742.pem                                                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ ssh            │ functional-232588 ssh sudo cat /usr/share/ca-certificates/29245742.pem                                                                                          │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ ssh            │ functional-232588 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ image          │ functional-232588 image ls --format short --alsologtostderr                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ image          │ functional-232588 image ls --format yaml --alsologtostderr                                                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ ssh            │ functional-232588 ssh pgrep buildkitd                                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │                     │
	│ image          │ functional-232588 image build -t localhost/my-image:functional-232588 testdata/build --alsologtostderr                                                          │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ image          │ functional-232588 image ls                                                                                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ image          │ functional-232588 image ls --format json --alsologtostderr                                                                                                      │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ image          │ functional-232588 image ls --format table --alsologtostderr                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ update-context │ functional-232588 update-context --alsologtostderr -v=2                                                                                                         │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ update-context │ functional-232588 update-context --alsologtostderr -v=2                                                                                                         │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	│ update-context │ functional-232588 update-context --alsologtostderr -v=2                                                                                                         │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:01 UTC │ 17 Dec 25 11:01 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:00:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:00:51.150282 2991469 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:00:51.150465 2991469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:51.150489 2991469 out.go:374] Setting ErrFile to fd 2...
	I1217 11:00:51.150518 2991469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:51.150824 2991469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:00:51.151249 2991469 out.go:368] Setting JSON to false
	I1217 11:00:51.152238 2991469 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":60202,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:00:51.152357 2991469 start.go:143] virtualization:  
	I1217 11:00:51.155844 2991469 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:00:51.158983 2991469 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:00:51.159071 2991469 notify.go:221] Checking for updates...
	I1217 11:00:51.165084 2991469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:00:51.167975 2991469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:00:51.170864 2991469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:00:51.173758 2991469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:00:51.176681 2991469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:00:51.180035 2991469 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:00:51.180790 2991469 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:00:51.212640 2991469 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:00:51.212760 2991469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:00:51.268928 2991469 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:51.260149135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:00:51.269027 2991469 docker.go:319] overlay module found
	I1217 11:00:51.272025 2991469 out.go:179] * Using the docker driver based on existing profile
	I1217 11:00:51.274918 2991469 start.go:309] selected driver: docker
	I1217 11:00:51.274941 2991469 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:00:51.275036 2991469 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:00:51.275164 2991469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:00:51.332206 2991469 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:51.323192574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:00:51.332741 2991469 cni.go:84] Creating CNI manager for ""
	I1217 11:00:51.332806 2991469 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:00:51.332850 2991469 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:00:51.335859 2991469 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:00:57 functional-232588 containerd[9704]: time="2025-12-17T11:00:57.505404406Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:00:57 functional-232588 containerd[9704]: time="2025-12-17T11:00:57.505948549Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:00:58 functional-232588 containerd[9704]: time="2025-12-17T11:00:58.562944463Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232588\""
	Dec 17 11:00:58 functional-232588 containerd[9704]: time="2025-12-17T11:00:58.565595730Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232588\""
	Dec 17 11:00:58 functional-232588 containerd[9704]: time="2025-12-17T11:00:58.568529804Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 17 11:00:58 functional-232588 containerd[9704]: time="2025-12-17T11:00:58.576370121Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232588\" returns successfully"
	Dec 17 11:00:58 functional-232588 containerd[9704]: time="2025-12-17T11:00:58.801305982Z" level=info msg="No images store for sha256:dcc6c1299012d27018afa1129297bcd7c1383e3bac6b45e08cf564807c6e9825"
	Dec 17 11:00:58 functional-232588 containerd[9704]: time="2025-12-17T11:00:58.803588971Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232588\""
	Dec 17 11:00:58 functional-232588 containerd[9704]: time="2025-12-17T11:00:58.811530143Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:00:58 functional-232588 containerd[9704]: time="2025-12-17T11:00:58.812205753Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:00:59 functional-232588 containerd[9704]: time="2025-12-17T11:00:59.583788901Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232588\""
	Dec 17 11:00:59 functional-232588 containerd[9704]: time="2025-12-17T11:00:59.586204047Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232588\""
	Dec 17 11:00:59 functional-232588 containerd[9704]: time="2025-12-17T11:00:59.588176044Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 17 11:00:59 functional-232588 containerd[9704]: time="2025-12-17T11:00:59.599145302Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232588\" returns successfully"
	Dec 17 11:01:00 functional-232588 containerd[9704]: time="2025-12-17T11:01:00.558419308Z" level=info msg="No images store for sha256:7be5c60c86dbf911e3bb52728bec20a7b310998884ea7395e49801fdc1a937ef"
	Dec 17 11:01:00 functional-232588 containerd[9704]: time="2025-12-17T11:01:00.564071576Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232588\""
	Dec 17 11:01:00 functional-232588 containerd[9704]: time="2025-12-17T11:01:00.571642812Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:01:00 functional-232588 containerd[9704]: time="2025-12-17T11:01:00.572007587Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:01:07 functional-232588 containerd[9704]: time="2025-12-17T11:01:07.004690220Z" level=info msg="connecting to shim 59sr3c8u82hqsrw7i6ni5ikzx" address="unix:///run/containerd/s/53995951c9c6cfd748c9009e4f4b4a88302661b68d2630f8aa5aa39e58a1da63" namespace=k8s.io protocol=ttrpc version=3
	Dec 17 11:01:07 functional-232588 containerd[9704]: time="2025-12-17T11:01:07.084089036Z" level=info msg="shim disconnected" id=59sr3c8u82hqsrw7i6ni5ikzx namespace=k8s.io
	Dec 17 11:01:07 functional-232588 containerd[9704]: time="2025-12-17T11:01:07.084133113Z" level=info msg="cleaning up after shim disconnected" id=59sr3c8u82hqsrw7i6ni5ikzx namespace=k8s.io
	Dec 17 11:01:07 functional-232588 containerd[9704]: time="2025-12-17T11:01:07.084144715Z" level=info msg="cleaning up dead shim" id=59sr3c8u82hqsrw7i6ni5ikzx namespace=k8s.io
	Dec 17 11:01:07 functional-232588 containerd[9704]: time="2025-12-17T11:01:07.363480383Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-232588\""
	Dec 17 11:01:07 functional-232588 containerd[9704]: time="2025-12-17T11:01:07.371753289Z" level=info msg="ImageCreate event name:\"sha256:edb978592cc0fce3202df5ee58e082dd4c5400df5379a353440662a2925962e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:01:07 functional-232588 containerd[9704]: time="2025-12-17T11:01:07.372289062Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-232588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:02:37.910500   25079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:02:37.911214   25079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:02:37.913186   25079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:02:37.913775   25079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:02:37.915606   25079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:02:37 up 16:45,  0 user,  load average: 0.60, 0.51, 0.52
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:02:34 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:02:34 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 17 11:02:34 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:02:35 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:02:35 functional-232588 kubelet[24946]: E1217 11:02:35.057194   24946 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:02:35 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:02:35 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:02:35 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 17 11:02:35 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:02:35 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:02:35 functional-232588 kubelet[24952]: E1217 11:02:35.796842   24952 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:02:35 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:02:35 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:02:36 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 17 11:02:36 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:02:36 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:02:36 functional-232588 kubelet[24958]: E1217 11:02:36.544005   24958 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:02:36 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:02:36 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:02:37 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 17 11:02:37 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:02:37 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:02:37 functional-232588 kubelet[24989]: E1217 11:02:37.314446   24989 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:02:37 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:02:37 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
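
Every failure in this post-mortem traces back to the kubelet crash-loop at the end of the log above: kubelet v1.35.0-rc.1 exits on startup because the host still uses the cgroup v1 hierarchy, so the apiserver's static pod never comes up and each later check gets connection refused on port 8441. Two quick checks for that failure mode (a sketch; only the profile name is taken from this report):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy, as on this Ubuntu 20.04 builder
	stat -fc %T /sys/fs/cgroup/
	# Inside the node, nothing should answer on the apiserver port while kubelet is down
	minikube -p functional-232588 ssh -- sudo ss -ltnp | grep 8441 || echo "apiserver not listening"
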
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (321.262012ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-232588 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-232588 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (69.504738ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-232588 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
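
The template panic repeated above is a secondary failure: with the apiserver refusing connections, kubectl receives an empty List, and (index .items 0) faults on the empty slice before the connection error is all that remains to report. A guarded variant of the same template (a sketch, not what functional_test.go actually runs) degrades to empty output instead of a template error when no nodes come back:

	# {{if .items}} is false for an empty list, so the index expression is never evaluated
	kubectl --context functional-232588 get nodes --output=go-template \
	  --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'
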
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:

-- stdout --
	[
	    {
	        "Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	        "Created": "2025-12-17T10:31:38.417629873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2962990,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T10:31:38.484538313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
	        "HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
	        "LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
	        "Name": "/functional-232588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
	                "LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232588",
	                "Source": "/var/lib/docker/volumes/functional-232588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232588",
	                "name.minikube.sigs.k8s.io": "functional-232588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
	            "SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35733"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:06:f9:5f:98:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
	                    "EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232588",
	                        "f67a3fa8da99"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
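
Note on the inspect dump above: the details that matter for the failures below are the published port map (the guest apiserver port 8441/tcp is bound to 127.0.0.1:35736) and the container address 192.168.49.2 on the functional-232588 network. As an illustrative aside, not part of the test suite and with made-up struct names, the same binding can be pulled out of the inspect JSON in Go:

// Sketch only: decode `docker container inspect` output and print the host
// port bindings for the guest apiserver port 8441/tcp. Struct names are
// hypothetical; only the JSON shape from the dump above is assumed.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "functional-232588").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry // `docker inspect` always returns a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	if len(entries) == 0 {
		panic("no such container")
	}
	for _, b := range entries[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:35736 above
	}
}

Every connection-refused error below is this same port 8441 refusing connections because no apiserver is listening behind it.
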
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 2 (323.118912ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-232588 service hello-node --url --format={{.IP}}                                                                                        │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ service   │ functional-232588 service hello-node --url                                                                                                         │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001:/mount-9p --alsologtostderr -v=1              │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh -- ls -la /mount-9p                                                                                                          │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh cat /mount-9p/test-1765969241578179780                                                                                       │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh sudo umount -f /mount-9p                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun828762534/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh -- ls -la /mount-9p                                                                                                          │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh sudo umount -f /mount-9p                                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount1 --alsologtostderr -v=1               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount1                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount2 --alsologtostderr -v=1               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ mount     │ -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount3 --alsologtostderr -v=1               │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ ssh       │ functional-232588 ssh findmnt -T /mount2                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ ssh       │ functional-232588 ssh findmnt -T /mount3                                                                                                           │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │ 17 Dec 25 11:00 UTC │
	│ mount     │ -p functional-232588 --kill=true                                                                                                                   │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ start     │ -p functional-232588 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1  │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ start     │ -p functional-232588 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1  │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ start     │ -p functional-232588 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1            │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-232588 --alsologtostderr -v=1                                                                                     │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 11:00 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:00:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:00:51.150282 2991469 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:00:51.150465 2991469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:51.150489 2991469 out.go:374] Setting ErrFile to fd 2...
	I1217 11:00:51.150518 2991469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:51.150824 2991469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:00:51.151249 2991469 out.go:368] Setting JSON to false
	I1217 11:00:51.152238 2991469 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":60202,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:00:51.152357 2991469 start.go:143] virtualization:  
	I1217 11:00:51.155844 2991469 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:00:51.158983 2991469 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:00:51.159071 2991469 notify.go:221] Checking for updates...
	I1217 11:00:51.165084 2991469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:00:51.167975 2991469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:00:51.170864 2991469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:00:51.173758 2991469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:00:51.176681 2991469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:00:51.180035 2991469 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:00:51.180790 2991469 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:00:51.212640 2991469 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:00:51.212760 2991469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:00:51.268928 2991469 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:51.260149135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:00:51.269027 2991469 docker.go:319] overlay module found
	I1217 11:00:51.272025 2991469 out.go:179] * Using the docker driver based on existing profile
	I1217 11:00:51.274918 2991469 start.go:309] selected driver: docker
	I1217 11:00:51.274941 2991469 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:00:51.275036 2991469 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:00:51.275164 2991469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:00:51.332206 2991469 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:51.323192574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:00:51.332741 2991469 cni.go:84] Creating CNI manager for ""
	I1217 11:00:51.332806 2991469 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:00:51.332850 2991469 start.go:353] cluster config:
	{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:00:51.335859 2991469 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.366987997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367000042Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367054433Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367069325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367089255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367101152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367110883Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367125414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367141668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367171452Z" level=info msg="Connect containerd service"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.367467445Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.368062180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389242103Z" level=info msg="Start subscribing containerd event"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389467722Z" level=info msg="Start recovering state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.389473490Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.390097098Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430326171Z" level=info msg="Start event monitor"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430520850Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430594670Z" level=info msg="Start streaming server"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430655559Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430712788Z" level=info msg="runtime interface starting up..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430945234Z" level=info msg="starting plugins..."
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.430989147Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 10:46:19 functional-232588 containerd[9704]: time="2025-12-17T10:46:19.431326009Z" level=info msg="containerd successfully booted in 0.084806s"
	Dec 17 10:46:19 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:00:54.087614   23463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:54.088239   23463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:54.089790   23463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:54.090362   23463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 11:00:54.091916   23463 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:00:54 up 16:43,  0 user,  load average: 1.01, 0.40, 0.48
	Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:00:50 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:51 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 511.
	Dec 17 11:00:51 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:51 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:51 functional-232588 kubelet[23217]: E1217 11:00:51.556948   23217 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:51 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:51 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:52 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 512.
	Dec 17 11:00:52 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:52 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:52 functional-232588 kubelet[23252]: E1217 11:00:52.312063   23252 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:52 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:52 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:52 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 513.
	Dec 17 11:00:52 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:52 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:53 functional-232588 kubelet[23357]: E1217 11:00:53.018543   23357 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:53 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:53 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:00:53 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 514.
	Dec 17 11:00:53 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:53 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:00:53 functional-232588 kubelet[23389]: E1217 11:00:53.836201   23389 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:00:53 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:00:53 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 2 (302.061409ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.42s)
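
The kubelet excerpt in the post-mortem log above is the root cause for this and the other connection-refused failures: kubelet v1.35.0-rc.1 exits during configuration validation because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), the restart counter climbs past 500, and the apiserver never comes up. A quick host-side check for that condition (an illustrative sketch, not part of the suite) is to test for the cgroup v2 unified-hierarchy marker file:

// Sketch only: cgroup v2 exposes /sys/fs/cgroup/cgroup.controllers; on a
// cgroup v1 host the file is absent, which is the condition the kubelet
// validation error above trips on.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}

On this Ubuntu 20.04 / kernel 5.15 runner the check would report cgroup v1, matching the kubelet failure loop.
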

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1217 10:58:35.690423 2987105 out.go:360] Setting OutFile to fd 1 ...
I1217 10:58:35.692622 2987105 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:58:35.692675 2987105 out.go:374] Setting ErrFile to fd 2...
I1217 10:58:35.692700 2987105 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:58:35.693054 2987105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 10:58:35.693450 2987105 mustload.go:66] Loading cluster: functional-232588
I1217 10:58:35.694470 2987105 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 10:58:35.695180 2987105 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 10:58:35.743565 2987105 host.go:66] Checking if "functional-232588" exists ...
I1217 10:58:35.744027 2987105 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 10:58:35.854315 2987105 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 10:58:35.843543684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 10:58:35.854446 2987105 api_server.go:166] Checking apiserver status ...
I1217 10:58:35.854511 2987105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 10:58:35.854576 2987105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 10:58:35.908328 2987105 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
W1217 10:58:36.023512 2987105 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1217 10:58:36.027115 2987105 out.go:179] * The control-plane node functional-232588 apiserver is not running: (state=Stopped)
I1217 10:58:36.029977 2987105 out.go:179]   To start a cluster, run: "minikube start -p functional-232588"

                                                
                                                
stdout: * The control-plane node functional-232588 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-232588"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 2987106: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)
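
Exit status 103 here is minikube declining to run the command because its preflight found the apiserver stopped: the trace above shows it SSHing into the node over the published 22/tcp port (127.0.0.1:35733) and running sudo pgrep -xnf kube-apiserver.*minikube.*, which exits 1 when no apiserver process exists. The cheapest host-side equivalent (a sketch using the port mapping shown earlier, not minikube's actual check) is a plain TCP dial against the forwarded apiserver port:

// Sketch only: dial the host port that Docker forwards to the guest
// apiserver port 8441. "connection refused" here corresponds to the
// state=Stopped result reported throughout these failures.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:35736", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port accepting connections")
}
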

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-232588 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-232588 apply -f testdata/testsvc.yaml: exit status 1 (112.964391ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-232588 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (117.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.102.11.119": Temporary Error: Get "http://10.102.11.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-232588 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-232588 get svc nginx-svc: exit status 1 (61.92509ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-232588 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (117.62s)
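
The "Temporary Error ... Client.Timeout exceeded while awaiting headers" wording is net/http's client-side timeout surfacing through the test's retry helper: 10.102.11.119 is a cluster-internal service IP that is only reachable from the host while minikube tunnel holds a route to it, so with the tunnel (and the cluster) down the GET waits out its deadline. A minimal sketch of such a probe, with an illustrative timeout value:

// Sketch only: an HTTP GET with a client timeout. Against an address with
// no route or listener this fails with "context deadline exceeded
// (Client.Timeout exceeded while awaiting headers)", as in the log above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.102.11.119")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
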

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-232588 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-232588 create deployment hello-node --image kicbase/echo-server: exit status 1 (56.021538ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-232588 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 service list: exit status 103 (255.500981ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-232588 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232588"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-232588 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-232588 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-232588\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 service list -o json: exit status 103 (271.854197ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-232588 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232588"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-232588 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 service --namespace=default --https --url hello-node: exit status 103 (269.015547ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-232588 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232588"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-232588 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 service hello-node --url --format={{.IP}}: exit status 103 (269.867974ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-232588 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232588"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-232588 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-232588 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-232588\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 service hello-node --url: exit status 103 (251.788498ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-232588 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232588"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-232588 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-232588 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-232588"
functional_test.go:1579: failed to parse "* The control-plane node functional-232588 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-232588\"": parse "* The control-plane node functional-232588 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-232588\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)
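
The final error here is worth decoding: because the service command printed its two-line advisory instead of a URL, the test feeds that text to net/url, and the embedded newline is the "invalid control character" it rejects. A sketch that reproduces the parse failure (the string is an abbreviated stand-in for the advisory above):

// Sketch only: net/url rejects ASCII control characters, so parsing the
// two-line advisory text fails exactly as in the test log above.
package main

import (
	"fmt"
	"net/url"
)

func main() {
	_, err := url.Parse("* The control-plane node apiserver is not running\nTo start a cluster, run: minikube start")
	fmt.Println(err) // net/url: invalid control character in URL
}
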

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765969241578179780" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765969241578179780" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765969241578179780" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001/test-1765969241578179780
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (374.488217ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 11:00:41.952988 2924574 retry.go:31] will retry after 657.086567ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh -- ls -la /mount-9p
E1217 11:00:43.082704 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 11:00 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 11:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 11:00 test-1765969241578179780
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh cat /mount-9p/test-1765969241578179780
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-232588 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-232588 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (57.88921ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-232588 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (262.612342ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=39913)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 17 11:00 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 17 11:00 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 17 11:00 test-1765969241578179780
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-232588 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:39913
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001:/mount-9p --alsologtostderr -v=1] stderr:
I1217 11:00:41.635022 2989566 out.go:360] Setting OutFile to fd 1 ...
I1217 11:00:41.635189 2989566 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:00:41.635201 2989566 out.go:374] Setting ErrFile to fd 2...
I1217 11:00:41.635206 2989566 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:00:41.635541 2989566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 11:00:41.635855 2989566 mustload.go:66] Loading cluster: functional-232588
I1217 11:00:41.636663 2989566 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:00:41.637219 2989566 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 11:00:41.659592 2989566 host.go:66] Checking if "functional-232588" exists ...
I1217 11:00:41.659911 2989566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 11:00:41.761479 2989566 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:41.746945473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 11:00:41.761634 2989566 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 11:00:41.806422 2989566 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001 into VM as /mount-9p ...
I1217 11:00:41.809707 2989566 out.go:179]   - Mount type:   9p
I1217 11:00:41.812679 2989566 out.go:179]   - User ID:      docker
I1217 11:00:41.815618 2989566 out.go:179]   - Group ID:     docker
I1217 11:00:41.819798 2989566 out.go:179]   - Version:      9p2000.L
I1217 11:00:41.825799 2989566 out.go:179]   - Message Size: 262144
I1217 11:00:41.830589 2989566 out.go:179]   - Options:      map[]
I1217 11:00:41.833637 2989566 out.go:179]   - Bind Address: 192.168.49.1:39913
I1217 11:00:41.839236 2989566 out.go:179] * Userspace file server: 
I1217 11:00:41.841862 2989566 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 11:00:41.842218 2989566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 11:00:41.862099 2989566 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 11:00:41.958745 2989566 mount.go:180] unmount for /mount-9p ran successfully
I1217 11:00:41.958775 2989566 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1217 11:00:41.966960 2989566 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=39913,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1217 11:00:41.977218 2989566 main.go:127] stdlog: ufs.go:141 connected
I1217 11:00:41.977402 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tversion tag 65535 msize 262144 version '9P2000.L'
I1217 11:00:41.977450 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rversion tag 65535 msize 262144 version '9P2000'
I1217 11:00:41.977665 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1217 11:00:41.977739 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rattach tag 0 aqid (c9e036 2bf825e7 'd')
I1217 11:00:41.978411 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 0
I1217 11:00:41.978476 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9e036 2bf825e7 'd') m d775 at 0 mt 1765969241 l 4096 t 0 d 0 ext )
I1217 11:00:41.981793 2989566 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/.mount-process: {Name:mk7105ab641dadbb6350b898949bc9b69ccefab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:00:41.982030 2989566 mount.go:105] mount successful: ""
I1217 11:00:41.985495 2989566 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun345243528/001 to /mount-9p
I1217 11:00:41.988297 2989566 out.go:203] 
I1217 11:00:41.991115 2989566 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1217 11:00:43.137157 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 0
I1217 11:00:43.137239 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9e036 2bf825e7 'd') m d775 at 0 mt 1765969241 l 4096 t 0 d 0 ext )
I1217 11:00:43.137655 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 1 
I1217 11:00:43.137706 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 
I1217 11:00:43.137869 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Topen tag 0 fid 1 mode 0
I1217 11:00:43.137932 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Ropen tag 0 qid (c9e036 2bf825e7 'd') iounit 0
I1217 11:00:43.138085 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 0
I1217 11:00:43.138129 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9e036 2bf825e7 'd') m d775 at 0 mt 1765969241 l 4096 t 0 d 0 ext )
I1217 11:00:43.138312 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 1 offset 0 count 262120
I1217 11:00:43.138429 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 258
I1217 11:00:43.138574 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 1 offset 258 count 261862
I1217 11:00:43.138609 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 0
I1217 11:00:43.138750 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 1 offset 258 count 262120
I1217 11:00:43.138782 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 0
I1217 11:00:43.138943 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 11:00:43.139008 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 (c9e037 2bf825e7 '') 
I1217 11:00:43.139125 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.139166 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9e037 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.139322 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.139360 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9e037 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.139521 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 2
I1217 11:00:43.139563 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.139699 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 2 0:'test-1765969241578179780' 
I1217 11:00:43.139740 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 (c9e039 2bf825e7 '') 
I1217 11:00:43.139876 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.139915 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('test-1765969241578179780' 'jenkins' 'jenkins' '' q (c9e039 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.140055 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.140089 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('test-1765969241578179780' 'jenkins' 'jenkins' '' q (c9e039 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.140209 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 2
I1217 11:00:43.140236 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.140439 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 11:00:43.140480 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 (c9e038 2bf825e7 '') 
I1217 11:00:43.140799 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.140848 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9e038 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.140995 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.141035 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9e038 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.141154 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 2
I1217 11:00:43.141179 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.141407 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 1 offset 258 count 262120
I1217 11:00:43.141443 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 0
I1217 11:00:43.141594 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 1
I1217 11:00:43.141630 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.432635 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 1 0:'test-1765969241578179780' 
I1217 11:00:43.432713 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 (c9e039 2bf825e7 '') 
I1217 11:00:43.432973 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 1
I1217 11:00:43.433040 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('test-1765969241578179780' 'jenkins' 'jenkins' '' q (c9e039 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.433212 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 1 newfid 2 
I1217 11:00:43.433250 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 
I1217 11:00:43.433385 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Topen tag 0 fid 2 mode 0
I1217 11:00:43.433439 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Ropen tag 0 qid (c9e039 2bf825e7 '') iounit 0
I1217 11:00:43.433586 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 1
I1217 11:00:43.433654 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('test-1765969241578179780' 'jenkins' 'jenkins' '' q (c9e039 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.433807 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 2 offset 0 count 262120
I1217 11:00:43.433868 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 24
I1217 11:00:43.434023 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 2 offset 24 count 262120
I1217 11:00:43.434055 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 0
I1217 11:00:43.434200 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 2 offset 24 count 262120
I1217 11:00:43.434244 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 0
I1217 11:00:43.434519 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 2
I1217 11:00:43.434561 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.434714 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 1
I1217 11:00:43.434751 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.756660 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 0
I1217 11:00:43.756741 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9e036 2bf825e7 'd') m d775 at 0 mt 1765969241 l 4096 t 0 d 0 ext )
I1217 11:00:43.757157 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 1 
I1217 11:00:43.757207 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 
I1217 11:00:43.757390 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Topen tag 0 fid 1 mode 0
I1217 11:00:43.757488 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Ropen tag 0 qid (c9e036 2bf825e7 'd') iounit 0
I1217 11:00:43.757674 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 0
I1217 11:00:43.757720 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9e036 2bf825e7 'd') m d775 at 0 mt 1765969241 l 4096 t 0 d 0 ext )
I1217 11:00:43.757901 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 1 offset 0 count 262120
I1217 11:00:43.758030 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 258
I1217 11:00:43.758173 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 1 offset 258 count 261862
I1217 11:00:43.758205 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 0
I1217 11:00:43.758368 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 1 offset 258 count 262120
I1217 11:00:43.758415 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 0
I1217 11:00:43.758595 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 11:00:43.758647 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 (c9e037 2bf825e7 '') 
I1217 11:00:43.758770 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.758806 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9e037 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.758943 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.758983 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9e037 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.759106 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 2
I1217 11:00:43.759132 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.759267 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 2 0:'test-1765969241578179780' 
I1217 11:00:43.759300 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 (c9e039 2bf825e7 '') 
I1217 11:00:43.759422 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.759454 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('test-1765969241578179780' 'jenkins' 'jenkins' '' q (c9e039 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.759619 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.759705 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('test-1765969241578179780' 'jenkins' 'jenkins' '' q (c9e039 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.759889 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 2
I1217 11:00:43.759929 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.760074 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 11:00:43.760114 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rwalk tag 0 (c9e038 2bf825e7 '') 
I1217 11:00:43.760221 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.760256 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9e038 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.760385 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tstat tag 0 fid 2
I1217 11:00:43.760450 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9e038 2bf825e7 '') m 644 at 0 mt 1765969241 l 24 t 0 d 0 ext )
I1217 11:00:43.760580 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 2
I1217 11:00:43.760603 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.760725 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tread tag 0 fid 1 offset 258 count 262120
I1217 11:00:43.760781 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rread tag 0 count 0
I1217 11:00:43.760926 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 1
I1217 11:00:43.760955 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:43.762248 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1217 11:00:43.762303 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rerror tag 0 ename 'file not found' ecode 0
I1217 11:00:44.027487 2989566 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55278 Tclunk tag 0 fid 0
I1217 11:00:44.027547 2989566 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55278 Rclunk tag 0
I1217 11:00:44.028924 2989566 main.go:127] stdlog: ufs.go:147 disconnected
I1217 11:00:44.052847 2989566 out.go:179] * Unmounting /mount-9p ...
I1217 11:00:44.056479 2989566 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 11:00:44.065832 2989566 mount.go:180] unmount for /mount-9p ran successfully
I1217 11:00:44.065949 2989566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/.mount-process: {Name:mk7105ab641dadbb6350b898949bc9b69ccefab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:00:44.069082 2989566 out.go:203] 
W1217 11:00:44.072089 2989566 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1217 11:00:44.075029 2989566 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.57s)
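For reference, the opening of the 9P trace above (Tversion tag 65535 msize 262144 version '9P2000.L', answered by Rversion ... version '9P2000') is the standard version handshake: the guest offers 9P2000.L and minikube's userspace file server negotiates down to plain 9P2000. A minimal Go sketch of that message pair's wire encoding, per the 9P2000 layout size[4] type[1] tag[2] msize[4] version[s] (little-endian, strings length-prefixed); this is an illustration of the protocol, not minikube's ufs implementation:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

const (
	tversion = 100    // 9P message type Tversion
	rversion = 101    // 9P message type Rversion
	notag    = 0xFFFF // version messages carry tag 65535 (NOTAG), as in the trace
)

// encodeTversion builds size[4] type[1] tag[2] msize[4] version[s];
// the leading size field counts the whole message, including itself.
func encodeTversion(msize uint32, version string) []byte {
	var body bytes.Buffer
	binary.Write(&body, binary.LittleEndian, msize)
	binary.Write(&body, binary.LittleEndian, uint16(len(version)))
	body.WriteString(version)

	var msg bytes.Buffer
	binary.Write(&msg, binary.LittleEndian, uint32(4+1+2+body.Len()))
	msg.WriteByte(tversion)
	binary.Write(&msg, binary.LittleEndian, uint16(notag))
	msg.Write(body.Bytes())
	return msg.Bytes()
}

// decodeRversion extracts the negotiated msize and version from a reply.
func decodeRversion(msg []byte) (uint32, string, error) {
	if len(msg) < 13 || msg[4] != rversion {
		return 0, "", fmt.Errorf("not an Rversion message")
	}
	msize := binary.LittleEndian.Uint32(msg[7:11])
	n := int(binary.LittleEndian.Uint16(msg[11:13]))
	if len(msg) < 13+n {
		return 0, "", fmt.Errorf("truncated version string")
	}
	return msize, string(msg[13 : 13+n]), nil
}

func main() {
	// Encodes the same offer the guest kernel made in the trace above.
	fmt.Printf("Tversion: % x\n", encodeTversion(262144, "9P2000.L"))
}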

TestKubernetesUpgrade (798.36s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-452067 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1217 11:28:36.152660 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-452067 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.576071148s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-452067
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-452067: (1.348384015s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-452067 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-452067 status --format={{.Host}}: exit status 7 (69.284725ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
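The "(may be ok)" note reflects that "minikube status" deliberately exits nonzero when the host is stopped (exit status 7 here), so the test inspects the exit code rather than treating it as fatal. A hedged Go sketch of that pattern using os/exec (a standalone illustration, not the test's helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the status step above; a nonzero exit is informational
	// for a stopped host, not necessarily a test failure.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "kubernetes-upgrade-452067", "status", "--format={{.Host}}")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("status exited %d (may be ok): %s\n", ee.ExitCode(), out)
		return
	}
	fmt.Printf("status: %s\n", out)
}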
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-452067 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-452067 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m33.352934485s)

-- stdout --
	* [kubernetes-upgrade-452067] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-452067" primary control-plane node in "kubernetes-upgrade-452067" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1217 11:29:13.309317 3121455 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:29:13.309474 3121455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:29:13.309502 3121455 out.go:374] Setting ErrFile to fd 2...
	I1217 11:29:13.309513 3121455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:29:13.309889 3121455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:29:13.310349 3121455 out.go:368] Setting JSON to false
	I1217 11:29:13.311332 3121455 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":61904,"bootTime":1765909050,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:29:13.311431 3121455 start.go:143] virtualization:  
	I1217 11:29:13.314637 3121455 out.go:179] * [kubernetes-upgrade-452067] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:29:13.317273 3121455 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:29:13.317409 3121455 notify.go:221] Checking for updates...
	I1217 11:29:13.322948 3121455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:29:13.325861 3121455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:29:13.328852 3121455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:29:13.331729 3121455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:29:13.334667 3121455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:29:13.338039 3121455 config.go:182] Loaded profile config "kubernetes-upgrade-452067": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1217 11:29:13.338639 3121455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:29:13.374600 3121455 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:29:13.374726 3121455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:29:13.435636 3121455 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:29:13.42578529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:29:13.435739 3121455 docker.go:319] overlay module found
	I1217 11:29:13.438897 3121455 out.go:179] * Using the docker driver based on existing profile
	I1217 11:29:13.441749 3121455 start.go:309] selected driver: docker
	I1217 11:29:13.441772 3121455 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-452067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-452067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:29:13.441878 3121455 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:29:13.442588 3121455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:29:13.500374 3121455 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:29:13.490890381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:29:13.500802 3121455 cni.go:84] Creating CNI manager for ""
	I1217 11:29:13.500865 3121455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:29:13.500914 3121455 start.go:353] cluster config:
	{Name:kubernetes-upgrade-452067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-452067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:29:13.504149 3121455 out.go:179] * Starting "kubernetes-upgrade-452067" primary control-plane node in "kubernetes-upgrade-452067" cluster
	I1217 11:29:13.506944 3121455 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:29:13.509953 3121455 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:29:13.512697 3121455 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:29:13.512745 3121455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 11:29:13.512758 3121455 cache.go:65] Caching tarball of preloaded images
	I1217 11:29:13.512779 3121455 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:29:13.512855 3121455 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 11:29:13.512865 3121455 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 11:29:13.512970 3121455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/config.json ...
	I1217 11:29:13.532325 3121455 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:29:13.532349 3121455 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:29:13.532370 3121455 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:29:13.532401 3121455 start.go:360] acquireMachinesLock for kubernetes-upgrade-452067: {Name:mk58ebe73e83d2e9488c9109e1bc160ae3a58f86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:29:13.532508 3121455 start.go:364] duration metric: took 45.767µs to acquireMachinesLock for "kubernetes-upgrade-452067"
	I1217 11:29:13.532532 3121455 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:29:13.532537 3121455 fix.go:54] fixHost starting: 
	I1217 11:29:13.532817 3121455 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-452067 --format={{.State.Status}}
	I1217 11:29:13.550392 3121455 fix.go:112] recreateIfNeeded on kubernetes-upgrade-452067: state=Stopped err=<nil>
	W1217 11:29:13.550425 3121455 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 11:29:13.553627 3121455 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-452067" ...
	I1217 11:29:13.553717 3121455 cli_runner.go:164] Run: docker start kubernetes-upgrade-452067
	I1217 11:29:13.819968 3121455 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-452067 --format={{.State.Status}}
	I1217 11:29:13.849742 3121455 kic.go:432] container "kubernetes-upgrade-452067" state is running.
	I1217 11:29:13.850141 3121455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-452067
	I1217 11:29:13.884900 3121455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/config.json ...
	I1217 11:29:13.885368 3121455 machine.go:94] provisionDockerMachine start ...
	I1217 11:29:13.885458 3121455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-452067
	I1217 11:29:13.920754 3121455 main.go:143] libmachine: Using SSH client type: native
	I1217 11:29:13.920912 3121455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35963 <nil> <nil>}
	I1217 11:29:13.920921 3121455 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:29:13.921958 3121455 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 11:29:17.064460 3121455 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-452067
	
	I1217 11:29:17.064528 3121455 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-452067"
	I1217 11:29:17.064626 3121455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-452067
	I1217 11:29:17.088258 3121455 main.go:143] libmachine: Using SSH client type: native
	I1217 11:29:17.088369 3121455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35963 <nil> <nil>}
	I1217 11:29:17.088393 3121455 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-452067 && echo "kubernetes-upgrade-452067" | sudo tee /etc/hostname
	I1217 11:29:17.246618 3121455 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-452067
	
	I1217 11:29:17.246724 3121455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-452067
	I1217 11:29:17.280957 3121455 main.go:143] libmachine: Using SSH client type: native
	I1217 11:29:17.281083 3121455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35963 <nil> <nil>}
	I1217 11:29:17.281107 3121455 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-452067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-452067/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-452067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:29:17.421938 3121455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:29:17.421970 3121455 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:29:17.421997 3121455 ubuntu.go:190] setting up certificates
	I1217 11:29:17.422007 3121455 provision.go:84] configureAuth start
	I1217 11:29:17.422070 3121455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-452067
	I1217 11:29:17.461234 3121455 provision.go:143] copyHostCerts
	I1217 11:29:17.461296 3121455 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:29:17.461305 3121455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:29:17.461381 3121455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:29:17.461482 3121455 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:29:17.461487 3121455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:29:17.461513 3121455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:29:17.461583 3121455 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:29:17.461588 3121455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:29:17.461611 3121455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:29:17.461666 3121455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-452067 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-452067 localhost minikube]
	I1217 11:29:17.705298 3121455 provision.go:177] copyRemoteCerts
	I1217 11:29:17.705413 3121455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:29:17.705487 3121455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-452067
	I1217 11:29:17.740637 3121455 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35963 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/kubernetes-upgrade-452067/id_ed25519 Username:docker}
	I1217 11:29:17.841320 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:29:17.864202 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 11:29:17.898525 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 11:29:17.940794 3121455 provision.go:87] duration metric: took 518.759198ms to configureAuth
	I1217 11:29:17.940823 3121455 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:29:17.941034 3121455 config.go:182] Loaded profile config "kubernetes-upgrade-452067": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:29:17.941048 3121455 machine.go:97] duration metric: took 4.055668571s to provisionDockerMachine
	I1217 11:29:17.941057 3121455 start.go:293] postStartSetup for "kubernetes-upgrade-452067" (driver="docker")
	I1217 11:29:17.941071 3121455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:29:17.941133 3121455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:29:17.941188 3121455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-452067
	I1217 11:29:17.976048 3121455 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35963 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/kubernetes-upgrade-452067/id_ed25519 Username:docker}
	I1217 11:29:18.101806 3121455 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:29:18.106234 3121455 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:29:18.106261 3121455 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:29:18.106273 3121455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:29:18.106329 3121455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:29:18.106431 3121455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:29:18.106535 3121455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:29:18.129880 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:29:18.151896 3121455 start.go:296] duration metric: took 210.821147ms for postStartSetup
	I1217 11:29:18.151989 3121455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:29:18.152044 3121455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-452067
	I1217 11:29:18.175275 3121455 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35963 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/kubernetes-upgrade-452067/id_ed25519 Username:docker}
	I1217 11:29:18.274749 3121455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:29:18.280551 3121455 fix.go:56] duration metric: took 4.748006229s for fixHost
	I1217 11:29:18.280631 3121455 start.go:83] releasing machines lock for "kubernetes-upgrade-452067", held for 4.74810984s
	I1217 11:29:18.280731 3121455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-452067
	I1217 11:29:18.305300 3121455 ssh_runner.go:195] Run: cat /version.json
	I1217 11:29:18.305357 3121455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-452067
	I1217 11:29:18.307832 3121455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:29:18.307912 3121455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-452067
	I1217 11:29:18.326490 3121455 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35963 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/kubernetes-upgrade-452067/id_ed25519 Username:docker}
	I1217 11:29:18.343574 3121455 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35963 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/kubernetes-upgrade-452067/id_ed25519 Username:docker}
	I1217 11:29:18.424108 3121455 ssh_runner.go:195] Run: systemctl --version
	I1217 11:29:18.545334 3121455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:29:18.549930 3121455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:29:18.550013 3121455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:29:18.558980 3121455 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
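	Bridge and podman CNI configs are parked rather than deleted: any matching file under /etc/cni/net.d is renamed with a .mk_disabled suffix so minikube's own CNI choice can take over without destroying the originals. The logged find, quoted for a shell:

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;

	(Here none matched, so nothing was disabled.)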
	I1217 11:29:18.559007 3121455 start.go:496] detecting cgroup driver to use...
	I1217 11:29:18.559041 3121455 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:29:18.559086 3121455 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:29:18.577527 3121455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:29:18.591103 3121455 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:29:18.591182 3121455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:29:18.606939 3121455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:29:18.620558 3121455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:29:18.779812 3121455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:29:18.934382 3121455 docker.go:234] disabling docker service ...
	I1217 11:29:18.934492 3121455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:29:18.949656 3121455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:29:18.964693 3121455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:29:19.107476 3121455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:29:19.240647 3121455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:29:19.254040 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:29:19.270510 3121455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:29:19.280827 3121455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:29:19.289771 3121455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:29:19.289873 3121455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:29:19.299967 3121455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:29:19.310656 3121455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:29:19.324955 3121455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:29:19.335015 3121455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:29:19.344720 3121455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:29:19.354870 3121455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:29:19.367352 3121455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:29:19.377034 3121455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:29:19.385445 3121455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:29:19.393314 3121455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:29:19.519139 3121455 ssh_runner.go:195] Run: sudo systemctl restart containerd
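	The run of commands above configures the runtime in place; consolidated as a shell sketch (paths and values verbatim from the log, with the no-op edits omitted):

	    # Point crictl at containerd's socket
	    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml

	    # Pin the sandbox (pause) image and select the cgroupfs driver
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml

	    # Normalize the runtime handler to runc v2
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml

	    # Kernel prerequisite, then apply
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart containerd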
	I1217 11:29:19.674322 3121455 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:29:19.674459 3121455 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:29:19.679090 3121455 start.go:564] Will wait 60s for crictl version
	I1217 11:29:19.679157 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:19.685111 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:29:19.722101 3121455 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 11:29:19.722188 3121455 ssh_runner.go:195] Run: containerd --version
	I1217 11:29:19.749345 3121455 ssh_runner.go:195] Run: containerd --version
	I1217 11:29:19.782977 3121455 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:29:19.785962 3121455 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-452067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:29:19.806809 3121455 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:29:19.811301 3121455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
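	The hosts rewrite is idempotent: filter out any previous host.minikube.internal line, append the current mapping, and copy the temp file back (cp rather than mv, so the original file's inode and permissions survive):

	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.76.1\thost.minikube.internal\n'
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts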
	I1217 11:29:19.822010 3121455 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-452067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-452067 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:29:19.822123 3121455 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:29:19.822192 3121455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:29:19.850563 3121455 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1217 11:29:19.850637 3121455 ssh_runner.go:195] Run: which lz4
	I1217 11:29:19.854786 3121455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 11:29:19.863642 3121455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 11:29:19.863682 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305659384 bytes)
	I1217 11:29:23.380532 3121455 containerd.go:563] duration metric: took 3.52578314s to copy over tarball
	I1217 11:29:23.380611 3121455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 11:29:25.771415 3121455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.390774764s)
	I1217 11:29:25.771481 3121455 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
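	The "Cannot open: File exists" errors come from unpacking the preload over a /var that already holds extracted snapshot content: the zoneinfo entries under the overlayfs snapshot already exist with a conflicting type, so GNU tar refuses to recreate them. Minikube simply falls back to cached images (next lines); purely as a sketch, a re-run could be made tolerant with --skip-old-files, which is an assumed mitigation, not what the tool does:

	    sudo tar --skip-old-files --xattrs --xattrs-include security.capability \
	      -I lz4 -C /var -xf /preloaded.tar.lz4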
	I1217 11:29:25.771563 3121455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:29:25.799119 3121455 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1217 11:29:25.799144 3121455 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 11:29:25.799198 3121455 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:29:25.799421 3121455 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:29:25.799538 3121455 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:29:25.799900 3121455 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:29:25.799990 3121455 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:29:25.800081 3121455 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 11:29:25.800172 3121455 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:29:25.800274 3121455 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:29:25.803194 3121455 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:29:25.803580 3121455 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:29:25.803725 3121455 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:29:25.805163 3121455 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:29:25.805524 3121455 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 11:29:25.805729 3121455 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:29:25.805918 3121455 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:29:25.806143 3121455 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:29:26.152398 3121455 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1217 11:29:26.152528 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1217 11:29:26.166027 3121455 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1217 11:29:26.166105 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:29:26.177684 3121455 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1217 11:29:26.177782 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1217 11:29:26.248496 3121455 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1217 11:29:26.248605 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:29:26.257430 3121455 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1217 11:29:26.257509 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:29:26.285627 3121455 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1217 11:29:26.285721 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:29:26.354639 3121455 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1217 11:29:26.354759 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:29:26.354846 3121455 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1217 11:29:26.354892 3121455 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 11:29:26.354916 3121455 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1217 11:29:26.354988 3121455 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:29:26.355016 3121455 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1217 11:29:26.355038 3121455 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:29:26.355067 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:26.355096 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:26.354944 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:26.355194 3121455 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1217 11:29:26.355217 3121455 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:29:26.355252 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:26.355149 3121455 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1217 11:29:26.355303 3121455 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:29:26.355337 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:26.363649 3121455 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1217 11:29:26.363696 3121455 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:29:26.363753 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:26.390023 3121455 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1217 11:29:26.390309 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:29:26.390316 3121455 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:29:26.390417 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:26.390205 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 11:29:26.390237 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:29:26.390258 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:29:26.390287 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:29:26.390178 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 11:29:26.493740 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 11:29:26.493869 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:29:26.493959 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:29:26.494019 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:29:26.494057 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:29:26.494097 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 11:29:26.494139 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:29:26.614962 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:29:26.615067 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:29:26.615073 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 11:29:26.615146 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:29:26.615198 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:29:26.615282 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:29:26.615343 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 11:29:26.739096 3121455 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1217 11:29:26.739205 3121455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 11:29:26.739276 3121455 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 11:29:26.739343 3121455 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1217 11:29:26.739406 3121455 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 11:29:26.739437 3121455 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 11:29:26.739523 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:29:26.742165 3121455 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 11:29:26.774217 3121455 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 11:29:26.774342 3121455 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 11:29:26.774391 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1217 11:29:26.799853 3121455 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 11:29:26.799921 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1217 11:29:27.134438 3121455 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1217 11:29:27.134611 3121455 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1217 11:29:27.134676 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:29:27.182955 3121455 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1217 11:29:27.183021 3121455 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:29:27.183090 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:27.187428 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:29:27.326309 3121455 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 11:29:27.326417 3121455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 11:29:27.331560 3121455 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1217 11:29:27.331597 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1217 11:29:27.416880 3121455 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1217 11:29:27.417003 3121455 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1217 11:29:27.896330 3121455 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
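	Each cached image goes through the same check → remove → copy → import cycle against containerd's k8s.io namespace; with storage-provisioner as the worked example, the logged steps are:

	    # 1. Is the tag present at the expected digest?
	    sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5

	    # 2. It wasn't, so remove any stale tag first
	    sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5

	    # 3. After scp'ing the tarball to /var/lib/minikube/images/, import it
	    sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5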
	I1217 11:29:27.896390 3121455 cache_images.go:94] duration metric: took 2.097233607s to LoadCachedImages
	W1217 11:29:27.896505 3121455 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1: no such file or directory
	I1217 11:29:27.896515 3121455 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:29:27.896616 3121455 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-452067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-452067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
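	The empty ExecStart= in the unit above is deliberate systemd idiom: in a drop-in, an empty assignment clears the inherited list-valued setting, so the following ExecStart= replaces the base unit's command rather than adding a second one. Generic shape (/new/command is a placeholder):

	    [Service]
	    ExecStart=
	    ExecStart=/new/command --with-flags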
	I1217 11:29:27.896693 3121455 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:29:27.922677 3121455 cni.go:84] Creating CNI manager for ""
	I1217 11:29:27.922705 3121455 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:29:27.922724 3121455 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:29:27.922746 3121455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-452067 NodeName:kubernetes-upgrade-452067 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:29:27.922869 3121455 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-452067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:29:27.922947 3121455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:29:27.930916 3121455 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:29:27.930984 3121455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:29:27.939260 3121455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1217 11:29:27.952954 3121455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:29:27.966391 3121455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2243 bytes)
	I1217 11:29:27.982392 3121455 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:29:27.987170 3121455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:29:28.003845 3121455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:29:28.230378 3121455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:29:28.255518 3121455 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067 for IP: 192.168.76.2
	I1217 11:29:28.255535 3121455 certs.go:195] generating shared ca certs ...
	I1217 11:29:28.255551 3121455 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:29:28.255711 3121455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:29:28.255762 3121455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:29:28.255769 3121455 certs.go:257] generating profile certs ...
	I1217 11:29:28.255883 3121455 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/client.key
	I1217 11:29:28.255944 3121455 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/apiserver.key.3f7832ca
	I1217 11:29:28.256008 3121455 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/proxy-client.key
	I1217 11:29:28.256132 3121455 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:29:28.256169 3121455 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:29:28.256181 3121455 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:29:28.256209 3121455 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:29:28.256235 3121455 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:29:28.256260 3121455 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:29:28.256306 3121455 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:29:28.256915 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:29:28.292995 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:29:28.380951 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:29:28.404002 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:29:28.425400 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 11:29:28.445135 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:29:28.466298 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:29:28.486066 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 11:29:28.505887 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:29:28.526278 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:29:28.546770 3121455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:29:28.566656 3121455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:29:28.580641 3121455 ssh_runner.go:195] Run: openssl version
	I1217 11:29:28.590461 3121455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:29:28.599269 3121455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:29:28.608562 3121455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:29:28.614313 3121455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:29:28.614400 3121455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:29:28.658491 3121455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:29:28.666274 3121455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:29:28.682227 3121455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:29:28.693911 3121455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:29:28.697756 3121455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:29:28.697843 3121455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:29:28.739526 3121455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:29:28.747875 3121455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:29:28.757965 3121455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:29:28.767702 3121455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:29:28.772326 3121455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:29:28.772394 3121455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:29:28.828046 3121455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
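	The ls/openssl/ln/test sequence above implements OpenSSL's hashed CA directory layout: each CA must be reachable under /etc/ssl/certs as <subject-hash>.0 for library lookups. A sketch of the logged steps for minikubeCA (hash b5213941 in this run):

	    # Compute the subject hash OpenSSL uses for directory lookups
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    sudo test -L "/etc/ssl/certs/${h}.0"   # expects b5213941.0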
	I1217 11:29:28.836265 3121455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:29:28.841265 3121455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 11:29:28.886788 3121455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 11:29:28.930338 3121455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 11:29:28.972553 3121455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 11:29:29.018251 3121455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 11:29:29.089821 3121455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
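	-checkend 86400 asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, 1 means it would have expired, which is what decides whether a cert gets regenerated. For example:

	    openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo 'valid for at least 24h' || echo 'expiring within 24h'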
	I1217 11:29:29.140757 3121455 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-452067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-452067 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:29:29.140838 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:29:29.140900 3121455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:29:29.190545 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:29:29.190565 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:29:29.190570 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:29:29.190574 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:29:29.190577 3121455 cri.go:89] found id: ""
	I1217 11:29:29.190626 3121455 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1217 11:29:29.222285 3121455 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T11:29:29Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1217 11:29:29.222361 3121455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:29:29.234521 3121455 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 11:29:29.234550 3121455 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 11:29:29.234609 3121455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 11:29:29.243341 3121455 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:29:29.243719 3121455 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-452067" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:29:29.243810 3121455 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-452067" cluster setting kubeconfig missing "kubernetes-upgrade-452067" context setting]
	I1217 11:29:29.244102 3121455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:29:29.244831 3121455 kapi.go:59] client config for kubernetes-upgrade-452067: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/client.crt", KeyFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/client.key", CAFile:"/home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8
(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 11:29:29.245410 3121455 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 11:29:29.245425 3121455 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 11:29:29.245431 3121455 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 11:29:29.245435 3121455 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 11:29:29.245439 3121455 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 11:29:29.245710 3121455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 11:29:29.255736 3121455 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 11:28:52.318666432 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 11:29:27.977821634 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-452067"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
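	The heart of the drift is the kubeadm v1beta3 → v1beta4 API migration: extraArgs changes from a string map to a list of name/value objects (the list form also allows a flag to be repeated), alongside the kubernetesVersion bump. Side by side:

	    # kubeadm.k8s.io/v1beta3
	    extraArgs:
	      leader-elect: "false"

	    # kubeadm.k8s.io/v1beta4
	    extraArgs:
	      - name: "leader-elect"
	        value: "false"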
	I1217 11:29:29.255759 3121455 kubeadm.go:1161] stopping kube-system containers ...
	I1217 11:29:29.255774 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1217 11:29:29.255825 3121455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:29:29.300639 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:29:29.300659 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:29:29.300671 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:29:29.300676 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:29:29.300679 3121455 cri.go:89] found id: ""
	I1217 11:29:29.300684 3121455 cri.go:252] Stopping containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:29:29.300740 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:29:29.306348 3121455 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249
	I1217 11:29:29.348764 3121455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 11:29:29.368103 3121455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:29:29.379141 3121455 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 17 11:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 17 11:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 17 11:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 17 11:28 /etc/kubernetes/scheduler.conf
	
	I1217 11:29:29.379211 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:29:29.388780 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:29:29.398249 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:29:29.408383 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:29:29.408479 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:29:29.415994 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:29:29.424877 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:29:29.424942 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:29:29.433276 3121455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
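
Before re-running kubeadm, minikube sanity-checks each surviving kubeconfig for the expected control-plane endpoint and deletes the ones that fail the grep (here controller-manager.conf and scheduler.conf) so kubeadm will regenerate them. A shell paraphrase of that check-and-remove step (a sketch of the observed behavior, not minikube's literal code):

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done
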
	I1217 11:29:29.442850 3121455 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 11:29:29.496703 3121455 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 11:29:30.857022 3121455 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.360282327s)
	I1217 11:29:30.857094 3121455 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 11:29:31.110964 3121455 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 11:29:31.218738 3121455 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
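
Rather than a monolithic kubeadm init, minikube replays the individual init phases against the regenerated config, in the order seen above:

kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml

Each phase runs with PATH pointed at /var/lib/minikube/binaries/v1.35.0-rc.1 so the version-matched kubeadm binary is used.
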
	I1217 11:29:31.311842 3121455 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:29:31.311916 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:29:31.812020 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeated every ~0.5s, 11:29:32 through 11:30:30 (117 further attempts), none finding a kube-apiserver process ...]
	I1217 11:30:30.812114 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
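
The loop above is minikube's api_server wait: poll pgrep for a kube-apiserver process roughly every 500ms until one appears or the wait deadline expires. An equivalent shell sketch (minikube implements this in Go, with a timeout this sketch omits):

until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
  sleep 0.5
done

Here a full minute elapses with no match, so minikube falls back to collecting diagnostics.
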
	I1217 11:30:31.312117 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:31.312208 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:31.337539 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:31.337561 3121455 cri.go:89] found id: ""
	I1217 11:30:31.337569 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:31.337628 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:31.341095 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:31.341164 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:31.365538 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:31.365562 3121455 cri.go:89] found id: ""
	I1217 11:30:31.365571 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:31.365625 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:31.368953 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:31.369021 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:31.394141 3121455 cri.go:89] found id: ""
	I1217 11:30:31.394164 3121455 logs.go:282] 0 containers: []
	W1217 11:30:31.394173 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:31.394179 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:31.394244 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:31.420375 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:31.420399 3121455 cri.go:89] found id: ""
	I1217 11:30:31.420408 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:31.420512 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:31.424289 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:31.424356 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:31.448906 3121455 cri.go:89] found id: ""
	I1217 11:30:31.448930 3121455 logs.go:282] 0 containers: []
	W1217 11:30:31.448939 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:31.448946 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:31.449004 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:31.474942 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:31.474966 3121455 cri.go:89] found id: ""
	I1217 11:30:31.474975 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:31.475035 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:31.478789 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:31.478887 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:31.503210 3121455 cri.go:89] found id: ""
	I1217 11:30:31.503239 3121455 logs.go:282] 0 containers: []
	W1217 11:30:31.503248 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:31.503254 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:31.503312 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:31.527757 3121455 cri.go:89] found id: ""
	I1217 11:30:31.527782 3121455 logs.go:282] 0 containers: []
	W1217 11:30:31.527791 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:31.527804 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:31.527817 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:31.562194 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:31.562226 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:31.613250 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:31.613280 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:31.643894 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:31.643929 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:30:31.680666 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:31.680696 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:31.697841 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:31.697871 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:31.776067 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:31.776087 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:31.776132 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:31.831127 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:31.831161 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:31.881077 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:31.881110 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
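
From here the run settles into a diagnose-and-retry cycle that repeats every few seconds: probe for the apiserver, enumerate control-plane containers, and tail their logs. Only apiserver, etcd, scheduler, and controller-manager containers exist; coredns, kube-proxy, kindnet, and storage-provisioner were never started, and "describe nodes" keeps failing with connection refused on localhost:8443 because nothing is serving the API. The commands each cycle shells out to, as run above:

sudo crictl ps -a --quiet --name=kube-apiserver   # locate each component's container
sudo crictl logs --tail 400 <container-id>        # tail that component's log
sudo journalctl -u kubelet -n 400                 # kubelet unit log
sudo journalctl -u containerd -n 400              # container runtime log
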
	I1217 11:30:34.443928 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:30:34.454423 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:34.454496 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:34.480911 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:34.480935 3121455 cri.go:89] found id: ""
	I1217 11:30:34.480944 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:34.481004 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:34.484869 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:34.484959 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:34.510764 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:34.510830 3121455 cri.go:89] found id: ""
	I1217 11:30:34.510855 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:34.510944 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:34.514703 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:34.514828 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:34.539821 3121455 cri.go:89] found id: ""
	I1217 11:30:34.539891 3121455 logs.go:282] 0 containers: []
	W1217 11:30:34.539916 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:34.539936 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:34.540020 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:34.565155 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:34.565179 3121455 cri.go:89] found id: ""
	I1217 11:30:34.565187 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:34.565265 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:34.569002 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:34.569074 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:34.595099 3121455 cri.go:89] found id: ""
	I1217 11:30:34.595122 3121455 logs.go:282] 0 containers: []
	W1217 11:30:34.595131 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:34.595137 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:34.595196 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:34.620656 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:34.620677 3121455 cri.go:89] found id: ""
	I1217 11:30:34.620687 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:34.620761 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:34.624585 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:34.624682 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:34.649609 3121455 cri.go:89] found id: ""
	I1217 11:30:34.649682 3121455 logs.go:282] 0 containers: []
	W1217 11:30:34.649707 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:34.649729 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:34.649846 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:34.675327 3121455 cri.go:89] found id: ""
	I1217 11:30:34.675350 3121455 logs.go:282] 0 containers: []
	W1217 11:30:34.675375 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:34.675390 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:34.675404 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:34.743748 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:34.743776 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:34.743790 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:34.796854 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:34.798822 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:34.847630 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:34.847660 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:34.877508 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:34.877542 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:30:34.905868 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:34.905896 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:30:34.966642 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:34.966679 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:34.983076 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:34.983106 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:35.017981 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:35.018021 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:37.559550 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:30:37.569766 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:37.569837 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:37.594745 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:37.594767 3121455 cri.go:89] found id: ""
	I1217 11:30:37.594776 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:37.594833 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:37.598570 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:37.598643 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:37.624668 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:37.624695 3121455 cri.go:89] found id: ""
	I1217 11:30:37.624703 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:37.624759 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:37.628229 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:37.628298 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:37.658926 3121455 cri.go:89] found id: ""
	I1217 11:30:37.658950 3121455 logs.go:282] 0 containers: []
	W1217 11:30:37.658960 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:37.658967 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:37.659033 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:37.690605 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:37.690631 3121455 cri.go:89] found id: ""
	I1217 11:30:37.690640 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:37.690699 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:37.694484 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:37.694565 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:37.720621 3121455 cri.go:89] found id: ""
	I1217 11:30:37.720652 3121455 logs.go:282] 0 containers: []
	W1217 11:30:37.720662 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:37.720668 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:37.720727 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:37.746813 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:37.746838 3121455 cri.go:89] found id: ""
	I1217 11:30:37.746847 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:37.746915 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:37.757430 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:37.757507 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:37.784691 3121455 cri.go:89] found id: ""
	I1217 11:30:37.784716 3121455 logs.go:282] 0 containers: []
	W1217 11:30:37.784725 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:37.784732 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:37.784799 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:37.816220 3121455 cri.go:89] found id: ""
	I1217 11:30:37.816242 3121455 logs.go:282] 0 containers: []
	W1217 11:30:37.816250 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:37.816264 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:37.816275 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:30:37.877077 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:37.877112 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:37.909293 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:37.909326 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:30:37.944797 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:37.944830 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:37.961057 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:37.961088 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:38.033996 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:38.034026 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:38.034041 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:38.080226 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:38.080260 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:38.122220 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:38.122254 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:38.159038 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:38.159069 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:40.698132 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:30:40.708653 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:40.708729 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:40.738161 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:40.738182 3121455 cri.go:89] found id: ""
	I1217 11:30:40.738190 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:40.738259 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:40.741960 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:40.742038 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:40.776180 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:40.776251 3121455 cri.go:89] found id: ""
	I1217 11:30:40.776274 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:40.776356 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:40.780298 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:40.780368 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:40.814225 3121455 cri.go:89] found id: ""
	I1217 11:30:40.814249 3121455 logs.go:282] 0 containers: []
	W1217 11:30:40.814257 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:40.814263 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:40.814320 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:40.840163 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:40.840183 3121455 cri.go:89] found id: ""
	I1217 11:30:40.840191 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:40.840247 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:40.844231 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:40.844348 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:40.870665 3121455 cri.go:89] found id: ""
	I1217 11:30:40.870734 3121455 logs.go:282] 0 containers: []
	W1217 11:30:40.870758 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:40.870782 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:40.870879 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:40.896975 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:40.896998 3121455 cri.go:89] found id: ""
	I1217 11:30:40.897007 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:40.897064 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:40.900758 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:40.900844 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:40.924564 3121455 cri.go:89] found id: ""
	I1217 11:30:40.924647 3121455 logs.go:282] 0 containers: []
	W1217 11:30:40.924664 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:40.924671 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:40.924735 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:40.950336 3121455 cri.go:89] found id: ""
	I1217 11:30:40.950364 3121455 logs.go:282] 0 containers: []
	W1217 11:30:40.950373 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:40.950389 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:40.950400 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:30:41.006682 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:41.006719 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:41.047259 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:41.047291 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:41.079312 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:41.079344 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:41.109078 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:41.109109 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:30:41.138598 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:41.138628 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:41.154685 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:41.154712 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:41.222455 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:41.222526 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:41.222549 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:41.257901 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:41.257933 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:43.793388 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:30:43.803765 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:43.803833 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:43.833284 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:43.833307 3121455 cri.go:89] found id: ""
	I1217 11:30:43.833327 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:43.833386 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:43.837006 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:43.837117 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:43.875029 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:43.875055 3121455 cri.go:89] found id: ""
	I1217 11:30:43.875063 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:43.875117 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:43.878851 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:43.878924 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:43.903503 3121455 cri.go:89] found id: ""
	I1217 11:30:43.903524 3121455 logs.go:282] 0 containers: []
	W1217 11:30:43.903533 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:43.903539 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:43.903596 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:43.927249 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:43.927270 3121455 cri.go:89] found id: ""
	I1217 11:30:43.927278 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:43.927331 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:43.930864 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:43.930944 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:43.955423 3121455 cri.go:89] found id: ""
	I1217 11:30:43.955443 3121455 logs.go:282] 0 containers: []
	W1217 11:30:43.955452 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:43.955462 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:43.955519 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:43.982632 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:43.982655 3121455 cri.go:89] found id: ""
	I1217 11:30:43.982663 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:43.982718 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:43.986236 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:43.986325 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:44.014383 3121455 cri.go:89] found id: ""
	I1217 11:30:44.014460 3121455 logs.go:282] 0 containers: []
	W1217 11:30:44.014477 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:44.014485 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:44.014547 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:44.042673 3121455 cri.go:89] found id: ""
	I1217 11:30:44.042700 3121455 logs.go:282] 0 containers: []
	W1217 11:30:44.042709 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:44.042726 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:44.042738 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:44.059428 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:44.059502 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:44.123213 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:44.123233 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:44.123247 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:44.156707 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:44.156737 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:44.189217 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:44.189256 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:44.217947 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:44.218022 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:30:44.282609 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:44.282644 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:44.315690 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:44.315719 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:44.351213 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:44.351243 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:30:46.880773 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:30:46.890707 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:46.890780 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:46.915604 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:46.915628 3121455 cri.go:89] found id: ""
	I1217 11:30:46.915637 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:46.915694 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:46.919372 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:46.919442 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:46.945649 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:46.945671 3121455 cri.go:89] found id: ""
	I1217 11:30:46.945679 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:46.945736 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:46.949349 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:46.949438 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:46.973633 3121455 cri.go:89] found id: ""
	I1217 11:30:46.973658 3121455 logs.go:282] 0 containers: []
	W1217 11:30:46.973667 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:46.973673 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:46.973734 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:47.002141 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:47.002165 3121455 cri.go:89] found id: ""
	I1217 11:30:47.002173 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:47.002237 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:47.006350 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:47.006479 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:47.038713 3121455 cri.go:89] found id: ""
	I1217 11:30:47.038738 3121455 logs.go:282] 0 containers: []
	W1217 11:30:47.038748 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:47.038757 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:47.038816 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:47.064063 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:47.064086 3121455 cri.go:89] found id: ""
	I1217 11:30:47.064095 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:47.064151 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:47.068023 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:47.068099 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:47.099376 3121455 cri.go:89] found id: ""
	I1217 11:30:47.099398 3121455 logs.go:282] 0 containers: []
	W1217 11:30:47.099407 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:47.099413 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:47.099476 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:47.124873 3121455 cri.go:89] found id: ""
	I1217 11:30:47.124896 3121455 logs.go:282] 0 containers: []
	W1217 11:30:47.124904 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:47.124918 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:47.124933 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:47.193521 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:47.193543 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:47.193559 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:47.226215 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:47.226249 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:47.258455 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:47.258488 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:47.289010 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:47.289046 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:30:47.350265 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:47.350299 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:47.367020 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:47.367049 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:47.403464 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:47.403495 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:47.438939 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:47.438980 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
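
The block above is one pass of minikube's container-enumeration loop: for each expected control-plane component it asks crictl for all containers, running or exited, whose name matches. Throughout this run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager are ever found; coredns, kube-proxy, kindnet, and storage-provisioner come back empty because the cluster never progresses past the static pods. A condensed sketch of the same probe, assuming crictl is installed on the node:

    # One pass of the enumeration the harness performs above; empty output
    # means no container has ever been created for that component.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done
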
	I1217 11:30:49.973361 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:30:49.983788 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:49.983858 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:50.015729 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:50.015802 3121455 cri.go:89] found id: ""
	I1217 11:30:50.015828 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:50.015917 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:50.020001 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:50.020084 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:50.048294 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:50.048317 3121455 cri.go:89] found id: ""
	I1217 11:30:50.048326 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:50.048410 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:50.052644 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:50.052724 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:50.079005 3121455 cri.go:89] found id: ""
	I1217 11:30:50.079032 3121455 logs.go:282] 0 containers: []
	W1217 11:30:50.079041 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:50.079048 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:50.079117 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:50.105501 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:50.105533 3121455 cri.go:89] found id: ""
	I1217 11:30:50.105544 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:50.105623 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:50.109476 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:50.109570 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:50.134717 3121455 cri.go:89] found id: ""
	I1217 11:30:50.134740 3121455 logs.go:282] 0 containers: []
	W1217 11:30:50.134749 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:50.134756 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:50.134814 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:50.162974 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:50.162998 3121455 cri.go:89] found id: ""
	I1217 11:30:50.163006 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:50.163061 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:50.166790 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:50.166863 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:50.195439 3121455 cri.go:89] found id: ""
	I1217 11:30:50.195466 3121455 logs.go:282] 0 containers: []
	W1217 11:30:50.195476 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:50.195483 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:50.195546 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:50.220577 3121455 cri.go:89] found id: ""
	I1217 11:30:50.220604 3121455 logs.go:282] 0 containers: []
	W1217 11:30:50.220614 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:50.220627 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:50.220647 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:50.254167 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:50.254197 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:50.290190 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:50.290226 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:50.327278 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:50.327314 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:50.369123 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:50.369164 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:50.401200 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:50.401243 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:30:50.430945 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:50.430972 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:30:50.491819 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:50.491853 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:50.510953 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:50.510982 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:50.591949 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:53.092551 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:30:53.104095 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:53.104179 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:53.131557 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:53.131580 3121455 cri.go:89] found id: ""
	I1217 11:30:53.131589 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:53.131646 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:53.135522 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:53.135604 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:53.160275 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:53.160299 3121455 cri.go:89] found id: ""
	I1217 11:30:53.160308 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:53.160365 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:53.164209 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:53.164283 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:53.189728 3121455 cri.go:89] found id: ""
	I1217 11:30:53.189753 3121455 logs.go:282] 0 containers: []
	W1217 11:30:53.189763 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:53.189770 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:53.189827 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:53.215612 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:53.215636 3121455 cri.go:89] found id: ""
	I1217 11:30:53.215645 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:53.215706 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:53.219278 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:53.219355 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:53.244260 3121455 cri.go:89] found id: ""
	I1217 11:30:53.244283 3121455 logs.go:282] 0 containers: []
	W1217 11:30:53.244292 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:53.244299 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:53.244370 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:53.272831 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:53.272857 3121455 cri.go:89] found id: ""
	I1217 11:30:53.272865 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:53.272942 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:53.276950 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:53.277076 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:53.303258 3121455 cri.go:89] found id: ""
	I1217 11:30:53.303325 3121455 logs.go:282] 0 containers: []
	W1217 11:30:53.303351 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:53.303372 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:53.303449 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:53.328380 3121455 cri.go:89] found id: ""
	I1217 11:30:53.328480 3121455 logs.go:282] 0 containers: []
	W1217 11:30:53.328513 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:53.328541 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:53.328571 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:30:53.370696 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:53.370722 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:30:53.430932 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:53.430966 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:53.468701 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:53.468731 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:53.502207 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:53.502239 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:53.541530 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:53.541563 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:53.575443 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:53.575477 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:53.592059 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:53.592093 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:53.657756 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:53.657776 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:53.657789 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
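
For every container the enumeration does find, the harness tails the last 400 lines of its CRI log, then rounds the picture out with the kubelet and containerd journals and a filtered dmesg. The same evidence can be gathered by hand; a sketch assuming a systemd node with crictl, using the kube-apiserver ID reported above:

    # Per-cycle log fan-out, mirroring the commands in the log above.
    ID=d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770
    sudo crictl logs --tail 400 "$ID"        # container stdout/stderr
    sudo journalctl -u kubelet -n 400        # kubelet journal
    sudo journalctl -u containerd -n 400     # runtime journal
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
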
	I1217 11:30:56.193003 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:30:56.203317 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:56.203433 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:56.231060 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:56.231081 3121455 cri.go:89] found id: ""
	I1217 11:30:56.231089 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:56.231145 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:56.234966 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:56.235040 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:56.259925 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:56.259951 3121455 cri.go:89] found id: ""
	I1217 11:30:56.259959 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:56.260014 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:56.263566 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:56.263643 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:56.289778 3121455 cri.go:89] found id: ""
	I1217 11:30:56.289807 3121455 logs.go:282] 0 containers: []
	W1217 11:30:56.289816 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:56.289830 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:56.289900 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:56.320474 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:56.320498 3121455 cri.go:89] found id: ""
	I1217 11:30:56.320506 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:56.320569 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:56.324292 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:56.324368 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:56.349813 3121455 cri.go:89] found id: ""
	I1217 11:30:56.349836 3121455 logs.go:282] 0 containers: []
	W1217 11:30:56.349844 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:56.349851 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:56.349911 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:56.380174 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:56.380198 3121455 cri.go:89] found id: ""
	I1217 11:30:56.380206 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:56.380261 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:56.383972 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:56.384045 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:56.409363 3121455 cri.go:89] found id: ""
	I1217 11:30:56.409400 3121455 logs.go:282] 0 containers: []
	W1217 11:30:56.409410 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:56.409417 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:56.409487 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:56.435693 3121455 cri.go:89] found id: ""
	I1217 11:30:56.435770 3121455 logs.go:282] 0 containers: []
	W1217 11:30:56.435794 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:56.435834 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:56.435865 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:56.452208 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:56.452249 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:56.486089 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:56.486120 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:30:56.553437 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:56.553522 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:56.622931 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:56.622954 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:56.622967 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:56.660070 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:56.660101 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:56.696696 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:56.696730 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:56.733158 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:56.733192 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:56.763881 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:56.763917 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:30:59.314388 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:30:59.325118 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:30:59.325202 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:30:59.355376 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:30:59.355416 3121455 cri.go:89] found id: ""
	I1217 11:30:59.355424 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:30:59.355506 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:59.359782 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:30:59.359856 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:30:59.389131 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:59.389152 3121455 cri.go:89] found id: ""
	I1217 11:30:59.389160 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:30:59.389217 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:59.393091 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:30:59.393174 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:30:59.417760 3121455 cri.go:89] found id: ""
	I1217 11:30:59.417784 3121455 logs.go:282] 0 containers: []
	W1217 11:30:59.417792 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:30:59.417799 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:30:59.417859 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:30:59.443077 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:59.443099 3121455 cri.go:89] found id: ""
	I1217 11:30:59.443108 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:30:59.443164 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:59.446857 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:30:59.446967 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:30:59.472590 3121455 cri.go:89] found id: ""
	I1217 11:30:59.472616 3121455 logs.go:282] 0 containers: []
	W1217 11:30:59.472641 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:30:59.472648 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:30:59.472713 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:30:59.502998 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:59.503021 3121455 cri.go:89] found id: ""
	I1217 11:30:59.503030 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:30:59.503085 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:30:59.507982 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:30:59.508056 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:30:59.534728 3121455 cri.go:89] found id: ""
	I1217 11:30:59.534754 3121455 logs.go:282] 0 containers: []
	W1217 11:30:59.534763 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:30:59.534769 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:30:59.534828 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:30:59.565385 3121455 cri.go:89] found id: ""
	I1217 11:30:59.565410 3121455 logs.go:282] 0 containers: []
	W1217 11:30:59.565419 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:30:59.565434 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:30:59.565457 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:30:59.606492 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:30:59.606523 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:30:59.637546 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:30:59.637582 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:30:59.699909 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:30:59.699947 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:30:59.765921 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:30:59.765944 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:30:59.765958 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:30:59.802818 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:30:59.802853 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:30:59.850869 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:30:59.850902 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:30:59.888770 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:30:59.888801 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:30:59.904843 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:30:59.904879 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
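
Each cycle opens with a pgrep against the apiserver command line, and the cycles repeat on a roughly three-second cadence, which reads as a health-wait loop: the process exists, but the API never answers, so the harness re-gathers logs and tries again. A sketch of an equivalent wait, where the 60-second budget and 3-second interval are illustrative values, not taken from the harness:

    # Poll for a kube-apiserver process the way the cycles above do.
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
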
	I1217 11:31:02.440549 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:02.451156 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:02.451228 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:02.478336 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:02.478361 3121455 cri.go:89] found id: ""
	I1217 11:31:02.478369 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:02.478432 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:02.482068 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:02.482157 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:02.513918 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:02.513944 3121455 cri.go:89] found id: ""
	I1217 11:31:02.513954 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:02.514012 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:02.519361 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:02.519442 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:02.547072 3121455 cri.go:89] found id: ""
	I1217 11:31:02.547111 3121455 logs.go:282] 0 containers: []
	W1217 11:31:02.547120 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:02.547127 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:02.547203 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:02.582372 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:02.582394 3121455 cri.go:89] found id: ""
	I1217 11:31:02.582403 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:02.582459 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:02.586215 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:02.586303 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:02.611529 3121455 cri.go:89] found id: ""
	I1217 11:31:02.611557 3121455 logs.go:282] 0 containers: []
	W1217 11:31:02.611567 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:02.611574 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:02.611657 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:02.641450 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:02.641474 3121455 cri.go:89] found id: ""
	I1217 11:31:02.641482 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:02.641567 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:02.645386 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:02.645464 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:02.673747 3121455 cri.go:89] found id: ""
	I1217 11:31:02.673772 3121455 logs.go:282] 0 containers: []
	W1217 11:31:02.673781 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:02.673788 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:02.673868 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:02.700955 3121455 cri.go:89] found id: ""
	I1217 11:31:02.700977 3121455 logs.go:282] 0 containers: []
	W1217 11:31:02.700987 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:02.701016 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:02.701032 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:02.734909 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:02.734944 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:02.771960 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:02.771999 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:02.836480 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:02.836525 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:02.858223 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:02.858255 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:02.927521 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:02.927541 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:02.927554 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:02.962042 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:02.962081 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:02.993828 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:02.993860 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:03.023587 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:03.023625 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
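
The "container status" step carries its own runtime fallback: it resolves crictl if installed and drops back to docker otherwise. The one-liner from the log, expanded with comments (approximately — the original also falls back to docker if the crictl invocation itself fails):

    # Expansion of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a     # CRI runtimes (containerd in this job)
    else
      sudo docker ps -a     # fallback when crictl is absent
    fi
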
	I1217 11:31:05.553900 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:05.564331 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:05.564404 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:05.589894 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:05.589916 3121455 cri.go:89] found id: ""
	I1217 11:31:05.589924 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:05.589983 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:05.594077 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:05.594159 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:05.622019 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:05.622042 3121455 cri.go:89] found id: ""
	I1217 11:31:05.622050 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:05.622112 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:05.625961 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:05.626034 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:05.655851 3121455 cri.go:89] found id: ""
	I1217 11:31:05.655877 3121455 logs.go:282] 0 containers: []
	W1217 11:31:05.655886 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:05.655892 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:05.655970 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:05.681355 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:05.681389 3121455 cri.go:89] found id: ""
	I1217 11:31:05.681398 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:05.681474 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:05.685408 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:05.685482 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:05.710339 3121455 cri.go:89] found id: ""
	I1217 11:31:05.710367 3121455 logs.go:282] 0 containers: []
	W1217 11:31:05.710376 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:05.710383 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:05.710444 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:05.739942 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:05.739965 3121455 cri.go:89] found id: ""
	I1217 11:31:05.739974 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:05.740032 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:05.743727 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:05.743800 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:05.769164 3121455 cri.go:89] found id: ""
	I1217 11:31:05.769230 3121455 logs.go:282] 0 containers: []
	W1217 11:31:05.769245 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:05.769256 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:05.769316 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:05.795211 3121455 cri.go:89] found id: ""
	I1217 11:31:05.795237 3121455 logs.go:282] 0 containers: []
	W1217 11:31:05.795246 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:05.795261 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:05.795275 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:05.829423 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:05.829457 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:05.861900 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:05.861938 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:05.878546 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:05.878574 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:05.944347 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:05.944369 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:05.944386 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:05.978760 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:05.978792 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:06.018054 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:06.018088 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:06.049683 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:06.049716 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:06.107410 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:06.107444 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:08.640843 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:08.652681 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:08.652798 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:08.685555 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:08.685620 3121455 cri.go:89] found id: ""
	I1217 11:31:08.685645 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:08.685735 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:08.691528 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:08.691640 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:08.728402 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:08.728499 3121455 cri.go:89] found id: ""
	I1217 11:31:08.728523 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:08.728627 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:08.732971 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:08.733087 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:08.760632 3121455 cri.go:89] found id: ""
	I1217 11:31:08.760709 3121455 logs.go:282] 0 containers: []
	W1217 11:31:08.760732 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:08.760752 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:08.760844 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:08.797592 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:08.797665 3121455 cri.go:89] found id: ""
	I1217 11:31:08.797688 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:08.797777 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:08.802107 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:08.802216 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:08.830780 3121455 cri.go:89] found id: ""
	I1217 11:31:08.830854 3121455 logs.go:282] 0 containers: []
	W1217 11:31:08.830878 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:08.830900 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:08.830994 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:08.864038 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:08.864098 3121455 cri.go:89] found id: ""
	I1217 11:31:08.864121 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:08.864210 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:08.868608 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:08.868737 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:08.902267 3121455 cri.go:89] found id: ""
	I1217 11:31:08.902332 3121455 logs.go:282] 0 containers: []
	W1217 11:31:08.902356 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:08.902376 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:08.902465 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:08.938866 3121455 cri.go:89] found id: ""
	I1217 11:31:08.938942 3121455 logs.go:282] 0 containers: []
	W1217 11:31:08.938965 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:08.939008 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:08.939042 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:08.990979 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:08.991019 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:09.039422 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:09.039455 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:09.104185 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:09.104219 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:09.140241 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:09.140276 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:09.173405 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:09.173440 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:09.205138 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:09.205175 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:09.222145 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:09.222215 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:09.323862 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:09.323927 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:09.323955 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:11.916982 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:11.926931 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:11.927003 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:11.952350 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:11.952371 3121455 cri.go:89] found id: ""
	I1217 11:31:11.952379 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:11.952485 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:11.956241 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:11.956312 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:11.983657 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:11.983683 3121455 cri.go:89] found id: ""
	I1217 11:31:11.983691 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:11.983753 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:11.987257 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:11.987327 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:12.013261 3121455 cri.go:89] found id: ""
	I1217 11:31:12.013288 3121455 logs.go:282] 0 containers: []
	W1217 11:31:12.013299 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:12.013305 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:12.013366 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:12.040302 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:12.040327 3121455 cri.go:89] found id: ""
	I1217 11:31:12.040336 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:12.040394 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:12.044196 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:12.044319 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:12.069783 3121455 cri.go:89] found id: ""
	I1217 11:31:12.069809 3121455 logs.go:282] 0 containers: []
	W1217 11:31:12.069817 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:12.069825 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:12.069886 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:12.109805 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:12.109879 3121455 cri.go:89] found id: ""
	I1217 11:31:12.109896 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:12.109970 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:12.113740 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:12.113868 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:12.137847 3121455 cri.go:89] found id: ""
	I1217 11:31:12.137923 3121455 logs.go:282] 0 containers: []
	W1217 11:31:12.137947 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:12.137968 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:12.138039 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:12.162620 3121455 cri.go:89] found id: ""
	I1217 11:31:12.162647 3121455 logs.go:282] 0 containers: []
	W1217 11:31:12.162656 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:12.162685 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:12.162705 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:12.195112 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:12.195142 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:12.223651 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:12.223691 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:12.253151 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:12.253183 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:12.310305 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:12.310339 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:12.379809 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:12.379830 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:12.379842 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:12.413375 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:12.413410 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:12.448585 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:12.448620 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:12.489054 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:12.489088 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
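
The block above is one full iteration of the apiserver health-wait: a pgrep probe for the kube-apiserver process (pgrep -xnf matches the newest process whose full command line matches the pattern), a crictl sweep over each expected component, then a log-gathering pass. The same cycle repeats every few seconds below until the start timeout expires. A minimal sketch of that polling shape, assuming illustrative names (checkAPIServer and the 8-minute deadline are not minikube's actual identifiers or settings):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // checkAPIServer mirrors the probe in the log: non-nil error means
    // pgrep found no matching kube-apiserver process.
    func checkAPIServer() error {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    }

    func main() {
        deadline := time.Now().Add(8 * time.Minute)
        for time.Now().Before(deadline) {
            if err := checkAPIServer(); err == nil {
                fmt.Println("apiserver process is up")
                return
            }
            // Each miss triggers the diagnostic sweep seen above before retrying.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
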
	I1217 11:31:15.006822 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:15.025493 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:15.025637 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:15.067832 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:15.067857 3121455 cri.go:89] found id: ""
	I1217 11:31:15.067866 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:15.067927 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:15.071866 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:15.071948 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:15.105009 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:15.105062 3121455 cri.go:89] found id: ""
	I1217 11:31:15.105074 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:15.105162 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:15.110962 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:15.111063 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:15.141634 3121455 cri.go:89] found id: ""
	I1217 11:31:15.141660 3121455 logs.go:282] 0 containers: []
	W1217 11:31:15.141669 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:15.141676 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:15.141766 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:15.176623 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:15.176647 3121455 cri.go:89] found id: ""
	I1217 11:31:15.176655 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:15.176732 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:15.181060 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:15.181162 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:15.209327 3121455 cri.go:89] found id: ""
	I1217 11:31:15.209354 3121455 logs.go:282] 0 containers: []
	W1217 11:31:15.209364 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:15.209370 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:15.209479 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:15.247674 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:15.247698 3121455 cri.go:89] found id: ""
	I1217 11:31:15.247706 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:15.247798 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:15.251896 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:15.252000 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:15.282925 3121455 cri.go:89] found id: ""
	I1217 11:31:15.282951 3121455 logs.go:282] 0 containers: []
	W1217 11:31:15.282959 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:15.282969 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:15.283052 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:15.319274 3121455 cri.go:89] found id: ""
	I1217 11:31:15.319307 3121455 logs.go:282] 0 containers: []
	W1217 11:31:15.319316 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:15.319361 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:15.319379 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:15.339321 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:15.339348 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:15.433024 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:15.433045 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:15.433058 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:15.504660 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:15.504697 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:15.541835 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:15.541870 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:15.606481 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:15.606517 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:15.658956 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:15.658984 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:15.725693 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:15.725765 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:15.760261 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:15.760294 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
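
The "found id" / "N containers" pairs come from enumerating each component with the exact command shown in the log, crictl ps -a --quiet --name=<component>, which prints one container ID per line. A hedged sketch of that step (listContainers is an illustrative name, not the cri.go function):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the container IDs for one component; an empty
    // slice corresponds to the `No container was found` warnings above.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listContainers(c)
            fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
        }
    }
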
	I1217 11:31:18.302651 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:18.315822 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:18.315893 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:18.342137 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:18.342160 3121455 cri.go:89] found id: ""
	I1217 11:31:18.342169 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:18.342231 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:18.345949 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:18.346021 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:18.371061 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:18.371084 3121455 cri.go:89] found id: ""
	I1217 11:31:18.371093 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:18.371150 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:18.374820 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:18.374909 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:18.401143 3121455 cri.go:89] found id: ""
	I1217 11:31:18.401170 3121455 logs.go:282] 0 containers: []
	W1217 11:31:18.401180 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:18.401187 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:18.401247 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:18.427317 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:18.427340 3121455 cri.go:89] found id: ""
	I1217 11:31:18.427348 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:18.427404 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:18.431588 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:18.431661 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:18.456876 3121455 cri.go:89] found id: ""
	I1217 11:31:18.456914 3121455 logs.go:282] 0 containers: []
	W1217 11:31:18.456925 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:18.456932 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:18.457001 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:18.487606 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:18.487629 3121455 cri.go:89] found id: ""
	I1217 11:31:18.487637 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:18.487722 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:18.491433 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:18.491506 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:18.517043 3121455 cri.go:89] found id: ""
	I1217 11:31:18.517068 3121455 logs.go:282] 0 containers: []
	W1217 11:31:18.517078 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:18.517085 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:18.517142 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:18.548667 3121455 cri.go:89] found id: ""
	I1217 11:31:18.548691 3121455 logs.go:282] 0 containers: []
	W1217 11:31:18.548700 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:18.548715 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:18.548726 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:18.577136 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:18.577173 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:18.626550 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:18.626580 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:18.685332 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:18.685374 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:18.701999 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:18.702033 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:18.740888 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:18.740920 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:18.778362 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:18.778395 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:18.851529 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:18.851552 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:18.851565 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:18.886729 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:18.886764 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
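
Every "describe nodes" attempt in this loop fails identically: kubectl cannot reach localhost:8443 even though a kube-apiserver container exists, meaning the process is present but not (or no longer) accepting connections on the API port. A minimal way to check that distinction directly, independent of kubectl:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The kubeconfig above points at localhost:8443; "connection refused"
        // means nothing is listening there, regardless of container state.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }

When the port is closed while the container is running, the next place to look is the kube-apiserver container log gathered in each pass, which typically shows why the process is crash-looping or stuck before bind.
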
	I1217 11:31:21.420769 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:21.430941 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:21.431008 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:21.456183 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:21.456207 3121455 cri.go:89] found id: ""
	I1217 11:31:21.456214 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:21.456270 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:21.460293 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:21.460367 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:21.486848 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:21.486866 3121455 cri.go:89] found id: ""
	I1217 11:31:21.486874 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:21.486936 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:21.490731 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:21.490806 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:21.518952 3121455 cri.go:89] found id: ""
	I1217 11:31:21.518978 3121455 logs.go:282] 0 containers: []
	W1217 11:31:21.518986 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:21.518992 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:21.519051 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:21.549086 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:21.549160 3121455 cri.go:89] found id: ""
	I1217 11:31:21.549191 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:21.549282 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:21.552912 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:21.553062 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:21.578608 3121455 cri.go:89] found id: ""
	I1217 11:31:21.578633 3121455 logs.go:282] 0 containers: []
	W1217 11:31:21.578642 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:21.578649 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:21.578711 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:21.603762 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:21.603785 3121455 cri.go:89] found id: ""
	I1217 11:31:21.603793 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:21.603849 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:21.608102 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:21.608278 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:21.636217 3121455 cri.go:89] found id: ""
	I1217 11:31:21.636280 3121455 logs.go:282] 0 containers: []
	W1217 11:31:21.636302 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:21.636322 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:21.636408 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:21.662307 3121455 cri.go:89] found id: ""
	I1217 11:31:21.662338 3121455 logs.go:282] 0 containers: []
	W1217 11:31:21.662348 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:21.662361 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:21.662374 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:21.721990 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:21.722027 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:21.759419 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:21.759494 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:21.802315 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:21.802394 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:21.840652 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:21.840690 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:21.871572 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:21.871603 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:21.888295 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:21.888324 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:21.956039 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:21.956071 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:21.956104 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:21.988870 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:21.988903 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:24.524982 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:24.535645 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:24.535728 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:24.562897 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:24.562921 3121455 cri.go:89] found id: ""
	I1217 11:31:24.562931 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:24.562989 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:24.566924 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:24.567024 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:24.593937 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:24.593967 3121455 cri.go:89] found id: ""
	I1217 11:31:24.593976 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:24.594058 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:24.597853 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:24.597933 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:24.623404 3121455 cri.go:89] found id: ""
	I1217 11:31:24.623430 3121455 logs.go:282] 0 containers: []
	W1217 11:31:24.623439 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:24.623445 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:24.623505 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:24.649489 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:24.649509 3121455 cri.go:89] found id: ""
	I1217 11:31:24.649517 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:24.649572 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:24.653349 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:24.653424 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:24.678916 3121455 cri.go:89] found id: ""
	I1217 11:31:24.678941 3121455 logs.go:282] 0 containers: []
	W1217 11:31:24.678950 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:24.678957 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:24.679016 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:24.705296 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:24.705320 3121455 cri.go:89] found id: ""
	I1217 11:31:24.705329 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:24.705387 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:24.709150 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:24.709225 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:24.735599 3121455 cri.go:89] found id: ""
	I1217 11:31:24.735625 3121455 logs.go:282] 0 containers: []
	W1217 11:31:24.735635 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:24.735642 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:24.735701 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:24.768160 3121455 cri.go:89] found id: ""
	I1217 11:31:24.768189 3121455 logs.go:282] 0 containers: []
	W1217 11:31:24.768198 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:24.768211 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:24.768225 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:24.834224 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:24.834268 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:24.851571 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:24.851652 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:24.921224 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:24.921244 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:24.921257 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:24.956451 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:24.956483 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:24.994825 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:24.994862 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:25.035718 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:25.035753 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:25.079546 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:25.079577 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:25.108933 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:25.108966 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
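
The "container status" command uses a double fallback: `which crictl || echo crictl` keeps the pipeline intact even if crictl is not on PATH, and the trailing `|| sudo docker ps -a` falls back to the Docker CLI if the crictl invocation fails. A hedged Go equivalent of that fallback:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl; if it is missing or exits non-zero, fall back to the
        // Docker CLI, mirroring the shell fallback in the log line above.
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no usable container runtime CLI:", err)
            return
        }
        fmt.Print(string(out))
    }
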
	I1217 11:31:27.639085 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:27.649584 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:27.649665 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:27.679598 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:27.679619 3121455 cri.go:89] found id: ""
	I1217 11:31:27.679627 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:27.679701 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:27.683546 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:27.683622 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:27.711607 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:27.711631 3121455 cri.go:89] found id: ""
	I1217 11:31:27.711641 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:27.711700 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:27.715576 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:27.715651 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:27.740551 3121455 cri.go:89] found id: ""
	I1217 11:31:27.740578 3121455 logs.go:282] 0 containers: []
	W1217 11:31:27.740587 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:27.740593 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:27.740666 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:27.783822 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:27.783842 3121455 cri.go:89] found id: ""
	I1217 11:31:27.783852 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:27.783914 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:27.788364 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:27.788483 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:27.818184 3121455 cri.go:89] found id: ""
	I1217 11:31:27.818210 3121455 logs.go:282] 0 containers: []
	W1217 11:31:27.818219 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:27.818225 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:27.818284 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:27.848496 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:27.848521 3121455 cri.go:89] found id: ""
	I1217 11:31:27.848530 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:27.848596 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:27.854085 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:27.854234 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:27.880734 3121455 cri.go:89] found id: ""
	I1217 11:31:27.880803 3121455 logs.go:282] 0 containers: []
	W1217 11:31:27.880829 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:27.880842 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:27.880914 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:27.906810 3121455 cri.go:89] found id: ""
	I1217 11:31:27.906835 3121455 logs.go:282] 0 containers: []
	W1217 11:31:27.906846 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:27.906861 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:27.906873 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:27.937536 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:27.937562 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:27.971267 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:27.971301 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:28.030652 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:28.030687 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:28.047770 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:28.047804 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:28.129498 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:28.129519 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:28.129532 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:28.164129 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:28.164171 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:28.197737 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:28.197769 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:28.233509 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:28.233544 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
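
Besides per-container logs, each pass also collects host-level sources: the kubelet and containerd journals (last 400 lines each) and a severity-filtered dmesg. A sketch of that collection using the same commands the log shows (hostLogs is an illustrative name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostLogs gathers the host-level sources seen in each pass above; dmesg is
    // filtered to warn and above, as in the log's `--level warn,err,crit,...`.
    func hostLogs() map[string]string {
        cmds := map[string][]string{
            "kubelet":    {"journalctl", "-u", "kubelet", "-n", "400"},
            "containerd": {"journalctl", "-u", "containerd", "-n", "400"},
            "dmesg":      {"dmesg", "--level", "warn,err,crit,alert,emerg"},
        }
        out := make(map[string]string)
        for name, args := range cmds {
            b, _ := exec.Command("sudo", args...).CombinedOutput()
            out[name] = string(b)
        }
        return out
    }

    func main() {
        for name, logs := range hostLogs() {
            fmt.Printf("== %s (%d bytes)\n", name, len(logs))
        }
    }
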
	I1217 11:31:30.768622 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:30.780487 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:30.780657 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:30.820664 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:30.820684 3121455 cri.go:89] found id: ""
	I1217 11:31:30.820693 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:30.820769 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:30.824489 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:30.824569 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:30.861572 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:30.861596 3121455 cri.go:89] found id: ""
	I1217 11:31:30.861605 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:30.861660 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:30.865361 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:30.865438 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:30.895846 3121455 cri.go:89] found id: ""
	I1217 11:31:30.895877 3121455 logs.go:282] 0 containers: []
	W1217 11:31:30.895887 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:30.895894 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:30.895961 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:30.922544 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:30.922569 3121455 cri.go:89] found id: ""
	I1217 11:31:30.922578 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:30.922646 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:30.926855 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:30.926927 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:30.953033 3121455 cri.go:89] found id: ""
	I1217 11:31:30.953108 3121455 logs.go:282] 0 containers: []
	W1217 11:31:30.953144 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:30.953180 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:30.953279 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:30.994543 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:30.994608 3121455 cri.go:89] found id: ""
	I1217 11:31:30.994630 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:30.994714 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:31.000225 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:31.000362 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:31.034564 3121455 cri.go:89] found id: ""
	I1217 11:31:31.034641 3121455 logs.go:282] 0 containers: []
	W1217 11:31:31.034665 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:31.034687 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:31.034784 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:31.079570 3121455 cri.go:89] found id: ""
	I1217 11:31:31.079605 3121455 logs.go:282] 0 containers: []
	W1217 11:31:31.079615 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:31.079630 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:31.079643 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:31.160599 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:31.160643 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:31.160657 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:31.213700 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:31.213776 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:31.246319 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:31.246632 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:31.305700 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:31.305769 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:31.379055 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:31.379097 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:31.398969 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:31.399047 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:31.448295 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:31.448328 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:31.487326 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:31.487419 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
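
Note the consistent found/missing split across every pass: only the static-pod control-plane containers (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) exist, while coredns, kube-proxy, kindnet and storage-provisioner are absent. The latter are created through the API server, so their absence is expected for as long as it refuses connections. A speculative sketch of the gather dispatch the "Gathering logs for" lines imply (IDs truncated for brevity; the ordering in the real log varies per pass):

    package main

    import "fmt"

    func main() {
        // Containers found in this pass; empty slices match the
        // `No container was found` warnings above.
        containers := map[string][]string{
            "kube-apiserver":          {"d7a7f0f559aa"},
            "etcd":                    {"09acdaa5fe0e"},
            "kube-scheduler":          {"a9e29f10e429"},
            "kube-controller-manager": {"8fa1273b07a5"},
            "coredns": {}, "kube-proxy": {}, "kindnet": {}, "storage-provisioner": {},
        }
        // Fixed host-level sources are always gathered; per-component container
        // logs only when at least one ID was found.
        fixed := []string{"kubelet", "dmesg", "describe nodes", "containerd", "container status"}
        for _, s := range fixed {
            fmt.Println("gathering logs for", s)
        }
        for name, ids := range containers {
            for _, id := range ids {
                fmt.Printf("gathering logs for %s [%s]\n", name, id)
            }
        }
    }
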
	I1217 11:31:34.088759 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:34.103939 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:34.104016 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:34.132572 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:34.132592 3121455 cri.go:89] found id: ""
	I1217 11:31:34.132600 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:34.132681 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:34.136712 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:34.136790 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:34.162787 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:34.162810 3121455 cri.go:89] found id: ""
	I1217 11:31:34.162819 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:34.162876 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:34.166851 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:34.166932 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:34.197736 3121455 cri.go:89] found id: ""
	I1217 11:31:34.197766 3121455 logs.go:282] 0 containers: []
	W1217 11:31:34.197776 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:34.197782 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:34.197871 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:34.223582 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:34.223603 3121455 cri.go:89] found id: ""
	I1217 11:31:34.223612 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:34.223700 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:34.227678 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:34.227790 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:34.254053 3121455 cri.go:89] found id: ""
	I1217 11:31:34.254081 3121455 logs.go:282] 0 containers: []
	W1217 11:31:34.254102 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:34.254127 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:34.254203 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:34.284081 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:34.284103 3121455 cri.go:89] found id: ""
	I1217 11:31:34.284111 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:34.284170 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:34.289040 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:34.289115 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:34.314089 3121455 cri.go:89] found id: ""
	I1217 11:31:34.314116 3121455 logs.go:282] 0 containers: []
	W1217 11:31:34.314125 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:34.314132 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:34.314193 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:34.340930 3121455 cri.go:89] found id: ""
	I1217 11:31:34.340957 3121455 logs.go:282] 0 containers: []
	W1217 11:31:34.340966 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:34.340983 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:34.340998 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:34.400078 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:34.400121 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:34.471218 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:34.471244 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:34.471256 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:34.510435 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:34.510472 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:34.545893 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:34.546008 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:34.588373 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:34.588483 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:34.606276 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:34.606305 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:34.641404 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:34.641437 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:34.675164 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:34.675195 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:37.235696 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:37.246372 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:37.246442 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:37.272275 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:37.272299 3121455 cri.go:89] found id: ""
	I1217 11:31:37.272313 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:37.272369 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:37.276006 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:37.276079 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:37.301579 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:37.301598 3121455 cri.go:89] found id: ""
	I1217 11:31:37.301605 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:37.301659 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:37.305346 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:37.305420 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:37.334545 3121455 cri.go:89] found id: ""
	I1217 11:31:37.334570 3121455 logs.go:282] 0 containers: []
	W1217 11:31:37.334579 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:37.334586 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:37.334673 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:37.359589 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:37.359613 3121455 cri.go:89] found id: ""
	I1217 11:31:37.359620 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:37.359677 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:37.363287 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:37.363359 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:37.387935 3121455 cri.go:89] found id: ""
	I1217 11:31:37.387957 3121455 logs.go:282] 0 containers: []
	W1217 11:31:37.387966 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:37.387975 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:37.388033 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:37.413954 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:37.413979 3121455 cri.go:89] found id: ""
	I1217 11:31:37.413988 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:37.414045 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:37.417868 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:37.417943 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:37.443932 3121455 cri.go:89] found id: ""
	I1217 11:31:37.443960 3121455 logs.go:282] 0 containers: []
	W1217 11:31:37.443970 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:37.443976 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:37.444042 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:37.474400 3121455 cri.go:89] found id: ""
	I1217 11:31:37.474477 3121455 logs.go:282] 0 containers: []
	W1217 11:31:37.474500 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:37.474539 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:37.474563 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:37.528474 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:37.528509 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:37.550816 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:37.550847 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:37.621242 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:37.621309 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:37.621330 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:37.658698 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:37.658736 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:37.708194 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:37.708226 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:37.738414 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:37.738452 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:37.798481 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:37.798515 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:37.831713 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:37.831745 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
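
The cycle then repeats on a roughly three-second cadence: each pass opens with sudo pgrep -xnf kube-apiserver.*minikube.* and re-enumerates the CRI containers before gathering logs again. A sketch of that wait loop under the same assumptions (interval and pattern are read off the timestamps above; pollAPIServer is a hypothetical name):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // pollAPIServer probes for a running kube-apiserver process every ~3s,
    // matching the cadence of the pgrep lines above, until one appears or
    // the deadline passes.
    func pollAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a process matches the full command line.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := pollAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
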
	I1217 11:31:40.376819 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:40.387279 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:40.387351 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:40.418870 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:40.418896 3121455 cri.go:89] found id: ""
	I1217 11:31:40.418904 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:40.418965 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:40.422848 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:40.422925 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:40.447733 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:40.447759 3121455 cri.go:89] found id: ""
	I1217 11:31:40.447767 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:40.447829 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:40.451688 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:40.451817 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:40.481835 3121455 cri.go:89] found id: ""
	I1217 11:31:40.481914 3121455 logs.go:282] 0 containers: []
	W1217 11:31:40.481939 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:40.481955 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:40.482047 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:40.523336 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:40.523360 3121455 cri.go:89] found id: ""
	I1217 11:31:40.523368 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:40.523445 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:40.530092 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:40.530271 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:40.570620 3121455 cri.go:89] found id: ""
	I1217 11:31:40.570692 3121455 logs.go:282] 0 containers: []
	W1217 11:31:40.570717 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:40.570744 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:40.570842 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:40.602523 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:40.602543 3121455 cri.go:89] found id: ""
	I1217 11:31:40.602551 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:40.602605 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:40.606474 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:40.606552 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:40.633107 3121455 cri.go:89] found id: ""
	I1217 11:31:40.633131 3121455 logs.go:282] 0 containers: []
	W1217 11:31:40.633140 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:40.633146 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:40.633213 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:40.662268 3121455 cri.go:89] found id: ""
	I1217 11:31:40.662290 3121455 logs.go:282] 0 containers: []
	W1217 11:31:40.662299 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:40.662312 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:40.662323 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:40.720691 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:40.720730 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:40.758847 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:40.758882 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:40.797151 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:40.797185 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:40.841060 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:40.841097 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:40.871619 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:40.871656 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:40.888160 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:40.888192 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:40.956636 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:40.956662 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:40.956676 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:41.005020 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:41.005054 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:43.536983 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:43.548977 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:43.549049 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:43.580135 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:43.580154 3121455 cri.go:89] found id: ""
	I1217 11:31:43.580162 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:43.580220 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:43.584058 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:43.584135 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:43.618449 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:43.618486 3121455 cri.go:89] found id: ""
	I1217 11:31:43.618495 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:43.618557 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:43.622422 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:43.622545 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:43.652458 3121455 cri.go:89] found id: ""
	I1217 11:31:43.652484 3121455 logs.go:282] 0 containers: []
	W1217 11:31:43.652493 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:43.652500 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:43.652562 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:43.678954 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:43.678977 3121455 cri.go:89] found id: ""
	I1217 11:31:43.678985 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:43.679066 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:43.682746 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:43.682849 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:43.712758 3121455 cri.go:89] found id: ""
	I1217 11:31:43.712791 3121455 logs.go:282] 0 containers: []
	W1217 11:31:43.712800 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:43.712807 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:43.712869 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:43.743197 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:43.743221 3121455 cri.go:89] found id: ""
	I1217 11:31:43.743229 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:43.743314 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:43.747081 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:43.747181 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:43.772209 3121455 cri.go:89] found id: ""
	I1217 11:31:43.772234 3121455 logs.go:282] 0 containers: []
	W1217 11:31:43.772243 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:43.772250 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:43.772357 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:43.799342 3121455 cri.go:89] found id: ""
	I1217 11:31:43.799370 3121455 logs.go:282] 0 containers: []
	W1217 11:31:43.799378 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:43.799392 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:43.799404 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:43.834795 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:43.834829 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:43.866194 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:43.866229 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:43.925476 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:43.925510 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:43.995698 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:43.995718 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:43.995731 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:44.030451 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:44.030482 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:44.063277 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:44.063304 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:44.079433 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:44.079463 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:44.115745 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:44.115777 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
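
Each "found id:" pair above comes from one crictl ps -a --quiet --name=<name> call: --quiet prints one container ID per line, so empty output is the "0 containers" case that produces the No container was found matching warnings. A sketch of that step (listContainers is a hypothetical helper; it assumes crictl and sudo are available on the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the crictl listing step above: --quiet prints
    // one container ID per line, and -a includes non-running containers.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // An empty result corresponds to the "0 containers" log lines.
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-proxy"} {
            ids, err := listContainers(name)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }
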
	I1217 11:31:46.650163 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:46.660205 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:46.660319 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:46.686432 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:46.686452 3121455 cri.go:89] found id: ""
	I1217 11:31:46.686460 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:46.686524 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:46.690424 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:46.690501 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:46.715738 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:46.715761 3121455 cri.go:89] found id: ""
	I1217 11:31:46.715770 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:46.715826 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:46.719655 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:46.719731 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:46.745279 3121455 cri.go:89] found id: ""
	I1217 11:31:46.745302 3121455 logs.go:282] 0 containers: []
	W1217 11:31:46.745310 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:46.745322 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:46.745382 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:46.778387 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:46.778406 3121455 cri.go:89] found id: ""
	I1217 11:31:46.778414 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:46.778471 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:46.783557 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:46.783685 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:46.825003 3121455 cri.go:89] found id: ""
	I1217 11:31:46.825035 3121455 logs.go:282] 0 containers: []
	W1217 11:31:46.825045 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:46.825072 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:46.825153 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:46.859474 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:46.859549 3121455 cri.go:89] found id: ""
	I1217 11:31:46.859572 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:46.859642 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:46.863289 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:46.863399 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:46.889131 3121455 cri.go:89] found id: ""
	I1217 11:31:46.889156 3121455 logs.go:282] 0 containers: []
	W1217 11:31:46.889166 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:46.889172 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:46.889232 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:46.913518 3121455 cri.go:89] found id: ""
	I1217 11:31:46.913543 3121455 logs.go:282] 0 containers: []
	W1217 11:31:46.913555 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:46.913568 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:46.913579 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:46.974001 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:46.974036 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:47.045305 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:47.045327 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:47.045342 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:47.080187 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:47.080218 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:47.117147 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:47.117189 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:47.148339 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:47.148378 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:47.176872 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:47.176901 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:47.193530 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:47.193562 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:47.226350 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:47.226381 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:49.759112 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:49.770176 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:49.770243 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:49.805807 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:49.805826 3121455 cri.go:89] found id: ""
	I1217 11:31:49.805833 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:49.805891 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:49.810037 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:49.810106 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:49.841320 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:49.841342 3121455 cri.go:89] found id: ""
	I1217 11:31:49.841350 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:49.841405 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:49.845305 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:49.845375 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:49.874164 3121455 cri.go:89] found id: ""
	I1217 11:31:49.874186 3121455 logs.go:282] 0 containers: []
	W1217 11:31:49.874194 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:49.874200 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:49.874260 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:49.900723 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:49.900747 3121455 cri.go:89] found id: ""
	I1217 11:31:49.900756 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:49.900816 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:49.904515 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:49.904593 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:49.930544 3121455 cri.go:89] found id: ""
	I1217 11:31:49.930620 3121455 logs.go:282] 0 containers: []
	W1217 11:31:49.930642 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:49.930658 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:49.930731 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:49.960527 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:49.960552 3121455 cri.go:89] found id: ""
	I1217 11:31:49.960561 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:49.960635 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:49.964312 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:49.964468 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:49.995746 3121455 cri.go:89] found id: ""
	I1217 11:31:49.995775 3121455 logs.go:282] 0 containers: []
	W1217 11:31:49.995784 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:49.995791 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:49.995848 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:50.026677 3121455 cri.go:89] found id: ""
	I1217 11:31:50.026702 3121455 logs.go:282] 0 containers: []
	W1217 11:31:50.026712 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:50.026729 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:50.026742 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:50.044162 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:50.044196 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:50.079829 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:50.079862 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:50.108863 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:50.108898 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:50.157816 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:50.157842 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:50.218722 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:50.218757 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:50.290923 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:50.290956 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:50.290969 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:50.329592 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:50.329629 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:50.370725 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:50.370757 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:52.903919 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:52.919564 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:52.919678 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:52.973229 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:52.973254 3121455 cri.go:89] found id: ""
	I1217 11:31:52.973262 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:52.973332 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:52.977741 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:52.977826 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:53.011263 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:53.011305 3121455 cri.go:89] found id: ""
	I1217 11:31:53.011314 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:53.011396 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:53.016032 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:53.016139 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:53.045899 3121455 cri.go:89] found id: ""
	I1217 11:31:53.045958 3121455 logs.go:282] 0 containers: []
	W1217 11:31:53.045975 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:53.045983 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:53.046063 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:53.082586 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:53.082610 3121455 cri.go:89] found id: ""
	I1217 11:31:53.082618 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:53.082740 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:53.088692 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:53.088836 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:53.141507 3121455 cri.go:89] found id: ""
	I1217 11:31:53.141547 3121455 logs.go:282] 0 containers: []
	W1217 11:31:53.141575 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:53.141588 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:53.141665 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:53.172538 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:53.172560 3121455 cri.go:89] found id: ""
	I1217 11:31:53.172568 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:53.172660 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:53.176459 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:53.176530 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:53.210851 3121455 cri.go:89] found id: ""
	I1217 11:31:53.210873 3121455 logs.go:282] 0 containers: []
	W1217 11:31:53.210882 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:53.210888 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:53.210944 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:53.246405 3121455 cri.go:89] found id: ""
	I1217 11:31:53.246483 3121455 logs.go:282] 0 containers: []
	W1217 11:31:53.246505 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:53.246532 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:53.246572 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:53.263012 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:53.263089 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:53.301132 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:53.301211 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:53.357243 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:53.357321 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:53.409216 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:53.409259 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:53.447154 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:53.447189 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:53.569542 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:53.569565 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:53.569578 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:53.660392 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:53.662892 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:53.696270 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:53.696309 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:56.263498 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:56.274477 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:56.274552 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:56.301480 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:56.301502 3121455 cri.go:89] found id: ""
	I1217 11:31:56.301511 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:56.301570 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:56.305613 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:56.305686 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:56.335540 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:56.335562 3121455 cri.go:89] found id: ""
	I1217 11:31:56.335570 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:56.335623 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:56.339347 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:56.339426 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:56.364313 3121455 cri.go:89] found id: ""
	I1217 11:31:56.364340 3121455 logs.go:282] 0 containers: []
	W1217 11:31:56.364350 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:56.364356 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:56.364449 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:56.392313 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:56.392335 3121455 cri.go:89] found id: ""
	I1217 11:31:56.392343 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:56.392406 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:56.396277 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:56.396378 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:56.420844 3121455 cri.go:89] found id: ""
	I1217 11:31:56.420873 3121455 logs.go:282] 0 containers: []
	W1217 11:31:56.420883 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:56.420889 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:56.420978 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:56.446598 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:56.446667 3121455 cri.go:89] found id: ""
	I1217 11:31:56.446690 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:56.446771 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:56.450696 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:56.450775 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:56.476598 3121455 cri.go:89] found id: ""
	I1217 11:31:56.476622 3121455 logs.go:282] 0 containers: []
	W1217 11:31:56.476632 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:56.476638 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:56.476699 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:56.506733 3121455 cri.go:89] found id: ""
	I1217 11:31:56.506759 3121455 logs.go:282] 0 containers: []
	W1217 11:31:56.506768 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:56.506781 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:56.506792 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:56.539880 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:56.539919 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:56.605696 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:56.605784 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:56.624954 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:56.624988 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:56.699502 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:56.699529 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:56.699545 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:56.736750 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:56.736784 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:31:56.766794 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:56.766823 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:56.800790 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:56.800823 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:56.832941 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:56.832974 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:59.366815 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:31:59.376970 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:31:59.377065 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:31:59.402792 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:59.402815 3121455 cri.go:89] found id: ""
	I1217 11:31:59.402824 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:31:59.402878 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:59.406731 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:31:59.406807 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:31:59.432391 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:59.432448 3121455 cri.go:89] found id: ""
	I1217 11:31:59.432458 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:31:59.432519 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:59.436105 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:31:59.436182 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:31:59.460774 3121455 cri.go:89] found id: ""
	I1217 11:31:59.460800 3121455 logs.go:282] 0 containers: []
	W1217 11:31:59.460808 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:31:59.460815 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:31:59.460872 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:31:59.486234 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:59.486257 3121455 cri.go:89] found id: ""
	I1217 11:31:59.486265 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:31:59.486321 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:59.490065 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:31:59.490142 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:31:59.542547 3121455 cri.go:89] found id: ""
	I1217 11:31:59.542572 3121455 logs.go:282] 0 containers: []
	W1217 11:31:59.542581 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:31:59.542587 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:31:59.542647 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:31:59.575477 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:59.575501 3121455 cri.go:89] found id: ""
	I1217 11:31:59.575509 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:31:59.575565 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:31:59.579247 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:31:59.579319 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:31:59.613803 3121455 cri.go:89] found id: ""
	I1217 11:31:59.613829 3121455 logs.go:282] 0 containers: []
	W1217 11:31:59.613837 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:31:59.613843 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:31:59.613902 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:31:59.655424 3121455 cri.go:89] found id: ""
	I1217 11:31:59.655449 3121455 logs.go:282] 0 containers: []
	W1217 11:31:59.655458 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:31:59.655473 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:31:59.655486 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:31:59.719142 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:31:59.719177 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:31:59.789688 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:31:59.789750 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:31:59.789770 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:31:59.827291 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:31:59.827321 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:31:59.861510 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:31:59.861545 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:31:59.895138 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:31:59.895174 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:31:59.914224 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:31:59.914255 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:31:59.951590 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:31:59.951628 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:31:59.980742 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:31:59.980776 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
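The cycle above (pgrep for the apiserver process, one crictl listing per component, then log gathering) repeats every few seconds until the overall start timeout; the next iteration begins at 11:32:02 below. A simplified reconstruction of that poll loop (illustrative only, not minikube's actual implementation; the five-minute deadline is an assumption):

```go
// Poll for a running kube-apiserver process, mirroring the logged
// command `sudo pgrep -xnf kube-apiserver.*minikube.*`.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process is running")
			return // in the real flow, container and API health checks follow
		}
		time.Sleep(3 * time.Second) // cadence visible in the timestamps above
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```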
	I1217 11:32:02.593410 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:02.614323 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:02.614414 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:02.664502 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:02.664528 3121455 cri.go:89] found id: ""
	I1217 11:32:02.664536 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:02.664601 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:02.668629 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:02.668715 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:02.695003 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:02.695029 3121455 cri.go:89] found id: ""
	I1217 11:32:02.695038 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:02.695116 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:02.698978 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:02.699057 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:02.733798 3121455 cri.go:89] found id: ""
	I1217 11:32:02.733825 3121455 logs.go:282] 0 containers: []
	W1217 11:32:02.733834 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:02.733841 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:02.733900 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:02.777703 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:02.777727 3121455 cri.go:89] found id: ""
	I1217 11:32:02.777735 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:02.777794 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:02.784764 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:02.784868 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:02.838715 3121455 cri.go:89] found id: ""
	I1217 11:32:02.838741 3121455 logs.go:282] 0 containers: []
	W1217 11:32:02.838750 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:02.838756 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:02.838815 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:02.884978 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:02.884998 3121455 cri.go:89] found id: ""
	I1217 11:32:02.885006 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:02.885127 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:02.888956 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:02.889040 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:02.915744 3121455 cri.go:89] found id: ""
	I1217 11:32:02.915770 3121455 logs.go:282] 0 containers: []
	W1217 11:32:02.915778 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:02.915784 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:02.915850 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:02.941548 3121455 cri.go:89] found id: ""
	I1217 11:32:02.941575 3121455 logs.go:282] 0 containers: []
	W1217 11:32:02.941584 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:02.941600 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:02.941620 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:02.980171 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:02.980206 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:03.035667 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:03.035707 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:03.066358 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:03.066393 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:03.119795 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:03.119828 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:03.158032 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:03.158067 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:03.206999 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:03.207034 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:03.272898 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:03.272936 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:03.291602 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:03.291646 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:03.380087 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
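Each `sudo crictl ps -a --quiet --name=<component>` call in these cycles prints matching container IDs one per line; empty output is what the log reports as `0 containers: []`. A hedged sketch of that lookup (assuming crictl on PATH and non-interactive sudo):

```go
// List container IDs per component, as the crictl calls above do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// strings.Fields yields an empty slice for empty output ("0 containers").
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-proxy"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "lookup failed:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
```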
	I1217 11:32:05.880556 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:05.890444 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:05.890518 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:05.916394 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:05.916443 3121455 cri.go:89] found id: ""
	I1217 11:32:05.916453 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:05.916510 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:05.920053 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:05.920125 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:05.949285 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:05.949310 3121455 cri.go:89] found id: ""
	I1217 11:32:05.949318 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:05.949372 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:05.953260 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:05.953335 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:05.979025 3121455 cri.go:89] found id: ""
	I1217 11:32:05.979052 3121455 logs.go:282] 0 containers: []
	W1217 11:32:05.979061 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:05.979068 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:05.979152 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:06.016159 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:06.016183 3121455 cri.go:89] found id: ""
	I1217 11:32:06.016193 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:06.016254 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:06.020213 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:06.020296 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:06.052256 3121455 cri.go:89] found id: ""
	I1217 11:32:06.052279 3121455 logs.go:282] 0 containers: []
	W1217 11:32:06.052288 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:06.052295 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:06.052358 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:06.079335 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:06.079356 3121455 cri.go:89] found id: ""
	I1217 11:32:06.079363 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:06.079510 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:06.083425 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:06.083511 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:06.109339 3121455 cri.go:89] found id: ""
	I1217 11:32:06.109368 3121455 logs.go:282] 0 containers: []
	W1217 11:32:06.109380 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:06.109388 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:06.109447 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:06.134099 3121455 cri.go:89] found id: ""
	I1217 11:32:06.134122 3121455 logs.go:282] 0 containers: []
	W1217 11:32:06.134130 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:06.134145 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:06.134159 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:06.205688 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:06.205712 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:06.205725 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:06.240073 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:06.240102 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:06.284719 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:06.284749 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:06.314539 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:06.314574 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:06.373914 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:06.373953 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:06.398930 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:06.398959 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:06.430534 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:06.430565 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:06.464655 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:06.464691 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:08.995927 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:09.007744 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:09.007827 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:09.033126 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:09.033146 3121455 cri.go:89] found id: ""
	I1217 11:32:09.033154 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:09.033219 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:09.037048 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:09.037122 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:09.062479 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:09.062503 3121455 cri.go:89] found id: ""
	I1217 11:32:09.062511 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:09.062570 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:09.066232 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:09.066301 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:09.090350 3121455 cri.go:89] found id: ""
	I1217 11:32:09.090372 3121455 logs.go:282] 0 containers: []
	W1217 11:32:09.090380 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:09.090386 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:09.090447 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:09.114996 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:09.115016 3121455 cri.go:89] found id: ""
	I1217 11:32:09.115024 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:09.115078 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:09.118699 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:09.118771 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:09.147316 3121455 cri.go:89] found id: ""
	I1217 11:32:09.147341 3121455 logs.go:282] 0 containers: []
	W1217 11:32:09.147350 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:09.147357 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:09.147416 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:09.173666 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:09.173689 3121455 cri.go:89] found id: ""
	I1217 11:32:09.173697 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:09.173754 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:09.177565 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:09.177638 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:09.207099 3121455 cri.go:89] found id: ""
	I1217 11:32:09.207124 3121455 logs.go:282] 0 containers: []
	W1217 11:32:09.207134 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:09.207140 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:09.207202 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:09.233340 3121455 cri.go:89] found id: ""
	I1217 11:32:09.233366 3121455 logs.go:282] 0 containers: []
	W1217 11:32:09.233375 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:09.233389 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:09.233400 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:09.292244 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:09.292329 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:09.358158 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:09.358180 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:09.358194 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:09.393187 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:09.393220 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:09.425350 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:09.425384 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:09.465886 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:09.465916 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:09.494673 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:09.494705 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:09.517123 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:09.517154 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:09.570335 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:09.570367 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
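The "container status" command above is a shell fallback chain: the command substitution `which crictl || echo crictl` resolves crictl's full path (or leaves the bare name if it is not on PATH), and if the crictl listing fails outright, `|| sudo docker ps -a` tries the Docker CLI instead. The same logic, reconstructed in Go purely for illustration:

```go
// Prefer crictl for container status; fall back to docker, as the
// logged shell command does with `|| sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("no container runtime responded:", err)
		return
	}
	fmt.Print(string(out))
}
```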
	I1217 11:32:12.106577 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:12.116828 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:12.116910 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:12.142633 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:12.142659 3121455 cri.go:89] found id: ""
	I1217 11:32:12.142668 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:12.142723 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:12.146467 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:12.146542 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:12.175610 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:12.175634 3121455 cri.go:89] found id: ""
	I1217 11:32:12.175642 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:12.175701 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:12.179386 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:12.179461 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:12.205390 3121455 cri.go:89] found id: ""
	I1217 11:32:12.205415 3121455 logs.go:282] 0 containers: []
	W1217 11:32:12.205424 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:12.205430 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:12.205489 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:12.230428 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:12.230451 3121455 cri.go:89] found id: ""
	I1217 11:32:12.230461 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:12.230522 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:12.234395 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:12.234467 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:12.258518 3121455 cri.go:89] found id: ""
	I1217 11:32:12.258542 3121455 logs.go:282] 0 containers: []
	W1217 11:32:12.258550 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:12.258557 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:12.258614 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:12.287973 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:12.287998 3121455 cri.go:89] found id: ""
	I1217 11:32:12.288007 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:12.288062 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:12.291865 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:12.291938 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:12.319253 3121455 cri.go:89] found id: ""
	I1217 11:32:12.319296 3121455 logs.go:282] 0 containers: []
	W1217 11:32:12.319314 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:12.319324 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:12.319414 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:12.350253 3121455 cri.go:89] found id: ""
	I1217 11:32:12.350276 3121455 logs.go:282] 0 containers: []
	W1217 11:32:12.350284 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:12.350300 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:12.350311 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:12.366406 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:12.366434 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:12.399903 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:12.399934 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:12.432670 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:12.432707 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:12.469119 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:12.469153 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:12.528140 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:12.528216 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:12.603024 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:12.603067 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:12.678962 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:12.678983 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:12.678999 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:12.728155 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:12.728184 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:15.259282 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:15.270103 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:15.270178 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:15.296386 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:15.296410 3121455 cri.go:89] found id: ""
	I1217 11:32:15.296464 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:15.296530 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:15.300405 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:15.300518 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:15.327682 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:15.327705 3121455 cri.go:89] found id: ""
	I1217 11:32:15.327714 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:15.327771 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:15.331449 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:15.331525 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:15.357583 3121455 cri.go:89] found id: ""
	I1217 11:32:15.357610 3121455 logs.go:282] 0 containers: []
	W1217 11:32:15.357622 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:15.357629 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:15.357690 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:15.384601 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:15.384668 3121455 cri.go:89] found id: ""
	I1217 11:32:15.384690 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:15.384777 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:15.388557 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:15.388646 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:15.414333 3121455 cri.go:89] found id: ""
	I1217 11:32:15.414360 3121455 logs.go:282] 0 containers: []
	W1217 11:32:15.414368 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:15.414375 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:15.414438 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:15.440683 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:15.440709 3121455 cri.go:89] found id: ""
	I1217 11:32:15.440718 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:15.440775 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:15.444836 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:15.444912 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:15.470490 3121455 cri.go:89] found id: ""
	I1217 11:32:15.470517 3121455 logs.go:282] 0 containers: []
	W1217 11:32:15.470527 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:15.470533 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:15.470595 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:15.502495 3121455 cri.go:89] found id: ""
	I1217 11:32:15.502523 3121455 logs.go:282] 0 containers: []
	W1217 11:32:15.502533 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:15.502574 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:15.502595 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:15.557931 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:15.558007 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:15.619957 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:15.619996 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:15.654340 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:15.654373 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:15.670610 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:15.670646 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:15.737439 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:15.737470 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:15.737511 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:15.772795 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:15.772828 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:15.814625 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:15.814656 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:15.849613 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:15.849646 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:18.379790 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:18.390580 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:18.390656 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:18.443255 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:18.443279 3121455 cri.go:89] found id: ""
	I1217 11:32:18.443287 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:18.443345 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:18.447559 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:18.447638 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:18.478645 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:18.478671 3121455 cri.go:89] found id: ""
	I1217 11:32:18.478679 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:18.478736 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:18.483358 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:18.483437 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:18.538269 3121455 cri.go:89] found id: ""
	I1217 11:32:18.538298 3121455 logs.go:282] 0 containers: []
	W1217 11:32:18.538307 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:18.538316 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:18.538377 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:18.604405 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:18.604450 3121455 cri.go:89] found id: ""
	I1217 11:32:18.604458 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:18.604513 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:18.618852 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:18.618959 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:18.648994 3121455 cri.go:89] found id: ""
	I1217 11:32:18.649016 3121455 logs.go:282] 0 containers: []
	W1217 11:32:18.649026 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:18.649032 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:18.649096 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:18.675810 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:18.675834 3121455 cri.go:89] found id: ""
	I1217 11:32:18.675844 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:18.675901 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:18.679672 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:18.679786 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:18.707107 3121455 cri.go:89] found id: ""
	I1217 11:32:18.707131 3121455 logs.go:282] 0 containers: []
	W1217 11:32:18.707140 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:18.707146 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:18.707206 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:18.738026 3121455 cri.go:89] found id: ""
	I1217 11:32:18.738056 3121455 logs.go:282] 0 containers: []
	W1217 11:32:18.738066 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:18.738093 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:18.738120 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:18.755599 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:18.755632 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:18.796717 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:18.796758 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:18.830810 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:18.830844 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:18.865702 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:18.865739 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:18.894469 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:18.894504 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:18.952493 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:18.952527 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:19.023005 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:19.023026 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:19.023040 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:19.059360 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:19.059396 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:21.594543 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:21.606378 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:21.606451 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:21.652995 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:21.653015 3121455 cri.go:89] found id: ""
	I1217 11:32:21.653023 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:21.653083 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:21.657065 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:21.657158 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:21.696912 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:21.696932 3121455 cri.go:89] found id: ""
	I1217 11:32:21.696939 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:21.696995 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:21.702951 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:21.703018 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:21.735035 3121455 cri.go:89] found id: ""
	I1217 11:32:21.735056 3121455 logs.go:282] 0 containers: []
	W1217 11:32:21.735064 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:21.735070 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:21.735137 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:21.768956 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:21.768975 3121455 cri.go:89] found id: ""
	I1217 11:32:21.768983 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:21.769039 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:21.773279 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:21.773413 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:21.809018 3121455 cri.go:89] found id: ""
	I1217 11:32:21.809039 3121455 logs.go:282] 0 containers: []
	W1217 11:32:21.809048 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:21.809055 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:21.809113 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:21.847235 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:21.847253 3121455 cri.go:89] found id: ""
	I1217 11:32:21.847262 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:21.847318 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:21.851769 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:21.851844 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:21.888997 3121455 cri.go:89] found id: ""
	I1217 11:32:21.889019 3121455 logs.go:282] 0 containers: []
	W1217 11:32:21.889028 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:21.889034 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:21.889097 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:21.925903 3121455 cri.go:89] found id: ""
	I1217 11:32:21.925924 3121455 logs.go:282] 0 containers: []
	W1217 11:32:21.925933 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:21.925952 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:21.925963 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:21.961255 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:21.961348 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:22.023068 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:22.023109 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:22.072771 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:22.080445 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:22.136249 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:22.136280 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:22.173333 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:22.173367 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:22.207362 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:22.207389 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:22.267555 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:22.267598 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:22.286673 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:22.286771 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:22.364305 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:24.864538 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:24.875376 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:24.875453 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:24.911518 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:24.911543 3121455 cri.go:89] found id: ""
	I1217 11:32:24.911552 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:24.911610 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:24.916112 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:24.916188 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:24.945106 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:24.945130 3121455 cri.go:89] found id: ""
	I1217 11:32:24.945139 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:24.945196 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:24.949069 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:24.949141 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:24.981831 3121455 cri.go:89] found id: ""
	I1217 11:32:24.981862 3121455 logs.go:282] 0 containers: []
	W1217 11:32:24.981872 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:24.981879 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:24.981961 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:25.032606 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:25.032639 3121455 cri.go:89] found id: ""
	I1217 11:32:25.032647 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:25.032703 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:25.036769 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:25.036845 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:25.071922 3121455 cri.go:89] found id: ""
	I1217 11:32:25.071955 3121455 logs.go:282] 0 containers: []
	W1217 11:32:25.071964 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:25.071971 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:25.072044 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:25.122802 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:25.122822 3121455 cri.go:89] found id: ""
	I1217 11:32:25.122830 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:25.122897 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:25.127214 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:25.127303 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:25.170864 3121455 cri.go:89] found id: ""
	I1217 11:32:25.170898 3121455 logs.go:282] 0 containers: []
	W1217 11:32:25.170907 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:25.170914 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:25.170988 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:25.198770 3121455 cri.go:89] found id: ""
	I1217 11:32:25.198804 3121455 logs.go:282] 0 containers: []
	W1217 11:32:25.198813 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:25.198831 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:25.198846 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:25.263055 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:25.263086 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:25.368675 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:25.368693 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:25.368706 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:25.424995 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:25.425031 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:25.472992 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:25.473023 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:25.518976 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:25.519014 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:25.552389 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:25.552439 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:25.611060 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:25.611088 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:25.682328 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:25.682367 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
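The pass above is one full iteration of minikube's failure diagnostics: it enumerates each expected control-plane container via crictl, tails the kubelet, containerd, and kernel logs, and attempts `kubectl describe nodes` (which fails while nothing answers on localhost:8443). A minimal manual reproduction, assuming shell access to the node — every command below is copied verbatim from the trace; `<container-id>` is the only placeholder:

    # List each expected control-plane container; empty output means not running.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # Tail the last 400 lines of a found container's logs, then the unit logs.
    sudo /usr/local/bin/crictl logs --tail 400 <container-id>
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Fails with "connection refused" while nothing listens on :8443.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig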
	I1217 11:32:28.203348 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:28.213744 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:28.213836 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:28.247748 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:28.247785 3121455 cri.go:89] found id: ""
	I1217 11:32:28.247793 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:28.247877 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:28.251821 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:28.251928 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:28.290709 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:28.290743 3121455 cri.go:89] found id: ""
	I1217 11:32:28.290753 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:28.290835 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:28.295599 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:28.295702 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:28.333248 3121455 cri.go:89] found id: ""
	I1217 11:32:28.333288 3121455 logs.go:282] 0 containers: []
	W1217 11:32:28.333320 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:28.333333 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:28.333439 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:28.366227 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:28.366261 3121455 cri.go:89] found id: ""
	I1217 11:32:28.366270 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:28.366350 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:28.376102 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:28.376201 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:28.414800 3121455 cri.go:89] found id: ""
	I1217 11:32:28.414837 3121455 logs.go:282] 0 containers: []
	W1217 11:32:28.414846 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:28.414853 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:28.414956 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:28.447734 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:28.447772 3121455 cri.go:89] found id: ""
	I1217 11:32:28.447780 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:28.447869 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:28.452485 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:28.452622 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:28.498905 3121455 cri.go:89] found id: ""
	I1217 11:32:28.498968 3121455 logs.go:282] 0 containers: []
	W1217 11:32:28.498992 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:28.499012 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:28.499099 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:28.553327 3121455 cri.go:89] found id: ""
	I1217 11:32:28.553363 3121455 logs.go:282] 0 containers: []
	W1217 11:32:28.553373 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:28.553404 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:28.553422 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:28.639585 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:28.639668 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:28.685763 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:28.685840 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:28.720255 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:28.720327 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:28.763655 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:28.763742 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:28.801678 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:28.801751 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:28.840811 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:28.840867 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:28.899580 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:28.899650 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:28.917003 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:28.917087 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:29.020505 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:31.520740 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:31.532464 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:31.532536 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:31.559981 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:31.560001 3121455 cri.go:89] found id: ""
	I1217 11:32:31.560008 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:31.560063 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:31.567144 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:31.567223 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:31.592455 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:31.592476 3121455 cri.go:89] found id: ""
	I1217 11:32:31.592483 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:31.592537 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:31.596372 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:31.596469 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:31.630214 3121455 cri.go:89] found id: ""
	I1217 11:32:31.630251 3121455 logs.go:282] 0 containers: []
	W1217 11:32:31.630262 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:31.630268 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:31.630340 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:31.655606 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:31.655671 3121455 cri.go:89] found id: ""
	I1217 11:32:31.655694 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:31.655781 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:31.659337 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:31.659447 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:31.684232 3121455 cri.go:89] found id: ""
	I1217 11:32:31.684257 3121455 logs.go:282] 0 containers: []
	W1217 11:32:31.684267 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:31.684274 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:31.684351 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:31.709925 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:31.709995 3121455 cri.go:89] found id: ""
	I1217 11:32:31.710010 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:31.710069 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:31.714090 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:31.714174 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:31.739660 3121455 cri.go:89] found id: ""
	I1217 11:32:31.739689 3121455 logs.go:282] 0 containers: []
	W1217 11:32:31.739700 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:31.739707 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:31.739773 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:31.765728 3121455 cri.go:89] found id: ""
	I1217 11:32:31.765806 3121455 logs.go:282] 0 containers: []
	W1217 11:32:31.765831 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:31.765854 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:31.765866 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:31.810427 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:31.810460 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:31.860449 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:31.860474 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:31.925459 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:31.925498 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:31.943793 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:31.943822 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:32.023789 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:32.023812 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:32.023840 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:32.063695 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:32.063732 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:32.097026 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:32.097065 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:32.146833 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:32.146866 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:34.686818 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:34.698626 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:34.698703 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:34.727164 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:34.727191 3121455 cri.go:89] found id: ""
	I1217 11:32:34.727199 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:34.727258 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:34.731182 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:34.731255 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:34.761973 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:34.761996 3121455 cri.go:89] found id: ""
	I1217 11:32:34.762005 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:34.762062 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:34.765954 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:34.766031 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:34.791929 3121455 cri.go:89] found id: ""
	I1217 11:32:34.791955 3121455 logs.go:282] 0 containers: []
	W1217 11:32:34.791965 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:34.791971 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:34.792030 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:34.824397 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:34.824441 3121455 cri.go:89] found id: ""
	I1217 11:32:34.824449 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:34.824505 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:34.828150 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:34.828225 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:34.854541 3121455 cri.go:89] found id: ""
	I1217 11:32:34.854563 3121455 logs.go:282] 0 containers: []
	W1217 11:32:34.854571 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:34.854578 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:34.854635 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:34.879556 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:34.879628 3121455 cri.go:89] found id: ""
	I1217 11:32:34.879652 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:34.879733 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:34.883308 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:34.883379 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:34.913545 3121455 cri.go:89] found id: ""
	I1217 11:32:34.913573 3121455 logs.go:282] 0 containers: []
	W1217 11:32:34.913582 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:34.913589 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:34.913651 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:34.938652 3121455 cri.go:89] found id: ""
	I1217 11:32:34.938676 3121455 logs.go:282] 0 containers: []
	W1217 11:32:34.938685 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:34.938697 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:34.938714 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:34.968303 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:34.968337 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:35.028380 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:35.028479 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:35.045765 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:35.045794 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:35.081108 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:35.081141 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:35.115086 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:35.115120 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:35.143533 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:35.143573 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:35.223326 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:35.223390 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:35.223419 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:35.278777 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:35.278810 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:37.829769 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:37.839963 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:37.840040 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:37.866472 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:37.866498 3121455 cri.go:89] found id: ""
	I1217 11:32:37.866506 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:37.866569 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:37.870318 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:37.870393 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:37.894909 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:37.894931 3121455 cri.go:89] found id: ""
	I1217 11:32:37.894940 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:37.894994 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:37.899163 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:37.899232 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:37.929023 3121455 cri.go:89] found id: ""
	I1217 11:32:37.929047 3121455 logs.go:282] 0 containers: []
	W1217 11:32:37.929056 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:37.929062 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:37.929125 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:37.958117 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:37.958138 3121455 cri.go:89] found id: ""
	I1217 11:32:37.958146 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:37.958200 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:37.961926 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:37.962004 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:37.991631 3121455 cri.go:89] found id: ""
	I1217 11:32:37.991652 3121455 logs.go:282] 0 containers: []
	W1217 11:32:37.991661 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:37.991669 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:37.991726 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:38.025055 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:38.025080 3121455 cri.go:89] found id: ""
	I1217 11:32:38.025089 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:38.025152 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:38.029170 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:38.029265 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:38.060938 3121455 cri.go:89] found id: ""
	I1217 11:32:38.060965 3121455 logs.go:282] 0 containers: []
	W1217 11:32:38.060975 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:38.060992 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:38.061089 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:38.092140 3121455 cri.go:89] found id: ""
	I1217 11:32:38.092165 3121455 logs.go:282] 0 containers: []
	W1217 11:32:38.092174 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:38.092188 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:38.092206 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:38.149870 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:38.149910 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:38.184853 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:38.184882 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:38.221397 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:38.221441 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:38.259624 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:38.259658 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:38.297997 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:38.298076 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:38.333661 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:38.333700 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:38.379122 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:38.379151 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:38.395010 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:38.395038 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:38.464861 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:40.965168 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:40.975199 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:40.975268 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:41.004655 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:41.004677 3121455 cri.go:89] found id: ""
	I1217 11:32:41.004686 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:41.004747 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:41.008548 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:41.008635 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:41.033565 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:41.033590 3121455 cri.go:89] found id: ""
	I1217 11:32:41.033598 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:41.033652 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:41.037366 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:41.037441 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:41.062773 3121455 cri.go:89] found id: ""
	I1217 11:32:41.062800 3121455 logs.go:282] 0 containers: []
	W1217 11:32:41.062809 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:41.062815 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:41.062953 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:41.087955 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:41.087981 3121455 cri.go:89] found id: ""
	I1217 11:32:41.087990 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:41.088048 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:41.091658 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:41.091738 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:41.125394 3121455 cri.go:89] found id: ""
	I1217 11:32:41.125417 3121455 logs.go:282] 0 containers: []
	W1217 11:32:41.125426 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:41.125432 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:41.125495 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:41.154363 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:41.154388 3121455 cri.go:89] found id: ""
	I1217 11:32:41.154397 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:41.154458 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:41.158218 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:41.158291 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:41.184103 3121455 cri.go:89] found id: ""
	I1217 11:32:41.184124 3121455 logs.go:282] 0 containers: []
	W1217 11:32:41.184132 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:41.184138 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:41.184197 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:41.209465 3121455 cri.go:89] found id: ""
	I1217 11:32:41.209543 3121455 logs.go:282] 0 containers: []
	W1217 11:32:41.209566 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:41.209589 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:41.209615 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:41.225907 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:41.225939 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:41.261498 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:41.261535 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:41.301353 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:41.301385 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:41.339882 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:41.339916 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:41.379141 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:41.379174 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:41.407359 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:41.407397 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:41.436861 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:41.436887 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:41.494969 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:41.495045 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:41.560201 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:44.060988 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:44.072298 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:44.072368 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:44.098631 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:44.098655 3121455 cri.go:89] found id: ""
	I1217 11:32:44.098663 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:44.098719 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:44.102948 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:44.103033 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:44.129038 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:44.129063 3121455 cri.go:89] found id: ""
	I1217 11:32:44.129071 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:44.129130 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:44.133504 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:44.133574 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:44.167415 3121455 cri.go:89] found id: ""
	I1217 11:32:44.167437 3121455 logs.go:282] 0 containers: []
	W1217 11:32:44.167447 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:44.167453 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:44.167512 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:44.193103 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:44.193125 3121455 cri.go:89] found id: ""
	I1217 11:32:44.193133 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:44.193191 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:44.197144 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:44.197216 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:44.226209 3121455 cri.go:89] found id: ""
	I1217 11:32:44.226236 3121455 logs.go:282] 0 containers: []
	W1217 11:32:44.226245 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:44.226252 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:44.226333 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:44.259343 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:44.259376 3121455 cri.go:89] found id: ""
	I1217 11:32:44.259384 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:44.259456 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:44.263732 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:44.263804 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:44.295033 3121455 cri.go:89] found id: ""
	I1217 11:32:44.295059 3121455 logs.go:282] 0 containers: []
	W1217 11:32:44.295071 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:44.295077 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:44.295134 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:44.325085 3121455 cri.go:89] found id: ""
	I1217 11:32:44.325112 3121455 logs.go:282] 0 containers: []
	W1217 11:32:44.325121 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:44.325134 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:44.325145 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:44.397313 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:44.397334 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:44.397351 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:44.440607 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:44.440640 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:44.473728 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:44.473761 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:44.504047 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:44.504074 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:44.561543 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:44.561578 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:44.578068 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:44.578101 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:44.619822 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:44.619853 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:44.657904 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:44.657934 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:47.188780 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:47.199259 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:47.199343 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:47.225548 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:47.225573 3121455 cri.go:89] found id: ""
	I1217 11:32:47.225581 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:47.225638 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:47.229549 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:47.229626 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:47.263167 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:47.263192 3121455 cri.go:89] found id: ""
	I1217 11:32:47.263201 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:47.263258 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:47.267989 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:47.268061 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:47.296477 3121455 cri.go:89] found id: ""
	I1217 11:32:47.296499 3121455 logs.go:282] 0 containers: []
	W1217 11:32:47.296508 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:47.296514 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:47.296609 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:47.327453 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:47.327472 3121455 cri.go:89] found id: ""
	I1217 11:32:47.327480 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:47.327535 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:47.331434 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:47.331505 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:47.359313 3121455 cri.go:89] found id: ""
	I1217 11:32:47.359338 3121455 logs.go:282] 0 containers: []
	W1217 11:32:47.359346 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:47.359353 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:47.359413 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:47.386071 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:47.386136 3121455 cri.go:89] found id: ""
	I1217 11:32:47.386158 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:47.386240 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:47.389956 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:47.390027 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:47.415237 3121455 cri.go:89] found id: ""
	I1217 11:32:47.415262 3121455 logs.go:282] 0 containers: []
	W1217 11:32:47.415272 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:47.415279 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:47.415336 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:47.441396 3121455 cri.go:89] found id: ""
	I1217 11:32:47.441434 3121455 logs.go:282] 0 containers: []
	W1217 11:32:47.441444 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:47.441461 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:47.441472 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:47.499589 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:47.499627 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:47.516917 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:47.516949 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:47.560567 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:47.560607 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:47.594222 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:47.594256 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:47.632577 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:47.632609 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:47.662093 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:47.662131 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:47.693946 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:47.693973 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:47.772633 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:47.772655 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:47.772668 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:50.311074 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:50.321107 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:50.321178 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:50.347188 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:50.347212 3121455 cri.go:89] found id: ""
	I1217 11:32:50.347221 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:50.347274 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:50.350892 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:50.350965 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:50.379353 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:50.379373 3121455 cri.go:89] found id: ""
	I1217 11:32:50.379381 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:50.379441 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:50.383211 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:50.383296 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:50.408798 3121455 cri.go:89] found id: ""
	I1217 11:32:50.408825 3121455 logs.go:282] 0 containers: []
	W1217 11:32:50.408835 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:50.408841 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:50.408901 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:50.435288 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:50.435311 3121455 cri.go:89] found id: ""
	I1217 11:32:50.435320 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:50.435376 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:50.439117 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:50.439192 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:50.465600 3121455 cri.go:89] found id: ""
	I1217 11:32:50.465631 3121455 logs.go:282] 0 containers: []
	W1217 11:32:50.465641 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:50.465648 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:50.465713 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:50.491557 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:50.491580 3121455 cri.go:89] found id: ""
	I1217 11:32:50.491590 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:50.491679 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:50.495266 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:50.495370 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:50.521317 3121455 cri.go:89] found id: ""
	I1217 11:32:50.521343 3121455 logs.go:282] 0 containers: []
	W1217 11:32:50.521352 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:50.521358 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:50.521469 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:50.546582 3121455 cri.go:89] found id: ""
	I1217 11:32:50.546606 3121455 logs.go:282] 0 containers: []
	W1217 11:32:50.546616 3121455 logs.go:284] No container was found matching "storage-provisioner"
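
The block above is minikube's per-component container discovery: for each control-plane name it asks the CRI for matching container IDs, then tails the logs of whatever it found in the gather steps that follow. A minimal sketch of the same probe, runnable by hand on the node (assumes shell access; the ID is the one reported above):

    sudo crictl ps -a --quiet --name=kube-apiserver    # prints matching container IDs, one per line
    sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770

Components with no matching container (coredns, kube-proxy, kindnet, storage-provisioner here) yield an empty ID list and are simply skipped in the gather phase.
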
	I1217 11:32:50.546629 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:50.546672 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:50.608508 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:50.608543 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:50.663614 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:50.663646 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:50.694242 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:50.694287 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:50.728261 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:50.728293 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:50.746036 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:50.746068 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:50.811254 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
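
This failure repeats throughout the section: a kube-apiserver container exists, but nothing answers on localhost:8443, the server address in /var/lib/minikube/kubeconfig, so every kubectl call inside the node is refused. A hedged way to confirm the port is dead from a shell on the node (ss and curl being available is an assumption about the node image; /healthz is the apiserver's standard health endpoint):

    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -k https://localhost:8443/healthz    # a healthy apiserver answers "ok"
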
	I1217 11:32:50.811276 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:50.811289 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:50.847474 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:50.847507 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:50.879715 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:50.879747 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
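
Note the "container status" command at 11:32:50.694 above: it is a two-level fallback, not a fixed path. The backticks substitute an absolute crictl path when which finds one (otherwise the bare name, relying on $PATH), and if the whole crictl invocation still fails, the gather falls back to the Docker CLI:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
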
	I1217 11:32:53.419942 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:53.430267 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:53.430340 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:53.455659 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:53.455678 3121455 cri.go:89] found id: ""
	I1217 11:32:53.455686 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:53.455742 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:53.459448 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:53.459529 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:53.486034 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:53.486060 3121455 cri.go:89] found id: ""
	I1217 11:32:53.486068 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:53.486125 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:53.490109 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:53.490216 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:53.516982 3121455 cri.go:89] found id: ""
	I1217 11:32:53.517007 3121455 logs.go:282] 0 containers: []
	W1217 11:32:53.517016 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:53.517023 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:53.517080 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:53.542148 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:53.542171 3121455 cri.go:89] found id: ""
	I1217 11:32:53.542180 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:53.542235 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:53.546024 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:53.546106 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:53.574667 3121455 cri.go:89] found id: ""
	I1217 11:32:53.574692 3121455 logs.go:282] 0 containers: []
	W1217 11:32:53.574701 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:53.574708 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:53.574781 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:53.603111 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:53.603135 3121455 cri.go:89] found id: ""
	I1217 11:32:53.603144 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:53.603200 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:53.607483 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:53.607557 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:53.634266 3121455 cri.go:89] found id: ""
	I1217 11:32:53.634297 3121455 logs.go:282] 0 containers: []
	W1217 11:32:53.634306 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:53.634313 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:53.634384 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:53.659896 3121455 cri.go:89] found id: ""
	I1217 11:32:53.659976 3121455 logs.go:282] 0 containers: []
	W1217 11:32:53.660001 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:53.660048 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:53.660096 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:53.694383 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:53.694420 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:53.732724 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:53.732758 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:53.773222 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:53.773256 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:53.789695 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:53.789725 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:53.860494 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:53.860516 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:53.860530 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:53.889547 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:53.889582 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:53.921195 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:53.921225 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:53.988494 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:53.988533 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:56.559816 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:56.570310 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:56.570390 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:56.600062 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:56.600087 3121455 cri.go:89] found id: ""
	I1217 11:32:56.600095 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:56.600162 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:56.604298 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:56.604364 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:56.633950 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:56.633975 3121455 cri.go:89] found id: ""
	I1217 11:32:56.633983 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:56.634058 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:56.637837 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:56.637909 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:56.662127 3121455 cri.go:89] found id: ""
	I1217 11:32:56.662154 3121455 logs.go:282] 0 containers: []
	W1217 11:32:56.662163 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:56.662169 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:56.662228 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:56.688348 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:56.688368 3121455 cri.go:89] found id: ""
	I1217 11:32:56.688376 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:56.688518 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:56.692348 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:56.692497 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:56.724042 3121455 cri.go:89] found id: ""
	I1217 11:32:56.724115 3121455 logs.go:282] 0 containers: []
	W1217 11:32:56.724139 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:56.724161 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:56.724248 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:56.750000 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:56.750025 3121455 cri.go:89] found id: ""
	I1217 11:32:56.750042 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:56.750102 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:56.754239 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:56.754322 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:56.783911 3121455 cri.go:89] found id: ""
	I1217 11:32:56.783988 3121455 logs.go:282] 0 containers: []
	W1217 11:32:56.784012 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:56.784040 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:56.784122 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:56.809194 3121455 cri.go:89] found id: ""
	I1217 11:32:56.809259 3121455 logs.go:282] 0 containers: []
	W1217 11:32:56.809283 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:56.809312 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:32:56.809340 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:32:56.867658 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:32:56.867695 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:32:56.886456 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:32:56.886485 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:32:56.957755 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:32:56.957822 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:56.957844 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:56.991576 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:32:56.991606 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:57.036225 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:57.036298 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:57.070750 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:57.070828 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:57.107375 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:32:57.107406 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:57.141592 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:32:57.141628 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:32:59.669657 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:32:59.680335 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:32:59.680412 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:32:59.709047 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:59.709074 3121455 cri.go:89] found id: ""
	I1217 11:32:59.709083 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:32:59.709145 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:59.713082 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:32:59.713157 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:32:59.740534 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:32:59.740556 3121455 cri.go:89] found id: ""
	I1217 11:32:59.740572 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:32:59.740632 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:59.744265 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:32:59.744343 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:32:59.769625 3121455 cri.go:89] found id: ""
	I1217 11:32:59.769647 3121455 logs.go:282] 0 containers: []
	W1217 11:32:59.769656 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:32:59.769662 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:32:59.769720 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:32:59.795712 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:32:59.795734 3121455 cri.go:89] found id: ""
	I1217 11:32:59.795743 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:32:59.795803 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:59.800036 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:32:59.800139 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:32:59.825604 3121455 cri.go:89] found id: ""
	I1217 11:32:59.825679 3121455 logs.go:282] 0 containers: []
	W1217 11:32:59.825704 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:32:59.825724 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:32:59.825814 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:32:59.855509 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:32:59.855529 3121455 cri.go:89] found id: ""
	I1217 11:32:59.855537 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:32:59.855599 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:32:59.859076 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:32:59.859149 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:32:59.887857 3121455 cri.go:89] found id: ""
	I1217 11:32:59.887883 3121455 logs.go:282] 0 containers: []
	W1217 11:32:59.887892 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:32:59.887899 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:32:59.887960 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:32:59.913451 3121455 cri.go:89] found id: ""
	I1217 11:32:59.913475 3121455 logs.go:282] 0 containers: []
	W1217 11:32:59.913484 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:32:59.913500 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:32:59.913512 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:32:59.942057 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:32:59.942091 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:32:59.987314 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:32:59.987346 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:00.149577 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:00.149638 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:00.270693 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:00.270727 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:33:00.360462 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:00.360508 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:33:00.379356 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:00.379475 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:00.458964 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:33:00.458985 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:00.458998 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:00.496265 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:00.496300 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:03.032600 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:03.043037 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:33:03.043110 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:33:03.069960 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:03.069985 3121455 cri.go:89] found id: ""
	I1217 11:33:03.069994 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:33:03.070054 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:03.074004 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:33:03.074079 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:33:03.099510 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:03.099531 3121455 cri.go:89] found id: ""
	I1217 11:33:03.099539 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:33:03.099595 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:03.103824 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:33:03.103900 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:33:03.129776 3121455 cri.go:89] found id: ""
	I1217 11:33:03.129802 3121455 logs.go:282] 0 containers: []
	W1217 11:33:03.129810 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:33:03.129817 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:33:03.129884 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:33:03.158911 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:03.158935 3121455 cri.go:89] found id: ""
	I1217 11:33:03.158944 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:33:03.159001 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:03.162593 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:33:03.162685 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:33:03.189900 3121455 cri.go:89] found id: ""
	I1217 11:33:03.189923 3121455 logs.go:282] 0 containers: []
	W1217 11:33:03.189960 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:33:03.189972 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:33:03.190056 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:33:03.215293 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:03.215318 3121455 cri.go:89] found id: ""
	I1217 11:33:03.215333 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:33:03.215390 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:03.219024 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:33:03.219102 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:33:03.251181 3121455 cri.go:89] found id: ""
	I1217 11:33:03.251209 3121455 logs.go:282] 0 containers: []
	W1217 11:33:03.251218 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:33:03.251225 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:33:03.251284 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:33:03.288594 3121455 cri.go:89] found id: ""
	I1217 11:33:03.288622 3121455 logs.go:282] 0 containers: []
	W1217 11:33:03.288631 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:33:03.288645 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:03.288659 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:03.327114 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:33:03.327145 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:33:03.358421 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:03.358456 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:03.387852 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:03.387882 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:33:03.405168 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:33:03.405200 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:03.451168 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:03.451203 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:03.489353 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:03.489386 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:33:03.547376 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:03.547418 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:03.618123 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:33:03.618144 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:33:03.618160 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:06.152637 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:06.163435 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:33:06.163507 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:33:06.190914 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:06.190938 3121455 cri.go:89] found id: ""
	I1217 11:33:06.190947 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:33:06.191008 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:06.194774 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:33:06.194845 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:33:06.222831 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:06.222862 3121455 cri.go:89] found id: ""
	I1217 11:33:06.222873 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:33:06.222931 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:06.226599 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:33:06.226676 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:33:06.257865 3121455 cri.go:89] found id: ""
	I1217 11:33:06.257888 3121455 logs.go:282] 0 containers: []
	W1217 11:33:06.257897 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:33:06.257903 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:33:06.257963 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:33:06.285867 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:06.285890 3121455 cri.go:89] found id: ""
	I1217 11:33:06.285899 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:33:06.285952 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:06.289934 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:33:06.290006 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:33:06.322150 3121455 cri.go:89] found id: ""
	I1217 11:33:06.322216 3121455 logs.go:282] 0 containers: []
	W1217 11:33:06.322232 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:33:06.322239 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:33:06.322305 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:33:06.347211 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:06.347235 3121455 cri.go:89] found id: ""
	I1217 11:33:06.347243 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:33:06.347324 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:06.350934 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:33:06.351005 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:33:06.387699 3121455 cri.go:89] found id: ""
	I1217 11:33:06.387731 3121455 logs.go:282] 0 containers: []
	W1217 11:33:06.387741 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:33:06.387747 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:33:06.387848 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:33:06.413010 3121455 cri.go:89] found id: ""
	I1217 11:33:06.413033 3121455 logs.go:282] 0 containers: []
	W1217 11:33:06.413041 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:33:06.413056 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:06.413067 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:33:06.429373 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:06.429413 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:06.495902 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:33:06.495925 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:33:06.495939 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:06.534952 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:06.534990 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:06.568673 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:33:06.568702 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:06.601057 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:06.601090 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:06.636130 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:06.636162 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:06.670112 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:33:06.670146 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:33:06.700956 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:06.700991 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
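
The cycles in this section repeat on a roughly three-second cadence (pgrep probes at 11:32:50, :53, :56, :59, 11:33:03, :06 above): minikube keeps re-checking for a live apiserver process and re-gathering logs, presumably until the apiserver responds or the start times out. Functionally this is a bounded wait loop around the probe from the log; a sketch (quoting added):

    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 3; done
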
	I1217 11:33:09.259778 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:09.271866 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:33:09.271938 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:33:09.300503 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:09.300523 3121455 cri.go:89] found id: ""
	I1217 11:33:09.300531 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:33:09.300633 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:09.305450 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:33:09.305522 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:33:09.349514 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:09.349545 3121455 cri.go:89] found id: ""
	I1217 11:33:09.349554 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:33:09.349635 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:09.353308 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:33:09.353379 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:33:09.379161 3121455 cri.go:89] found id: ""
	I1217 11:33:09.379184 3121455 logs.go:282] 0 containers: []
	W1217 11:33:09.379192 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:33:09.379199 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:33:09.379261 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:33:09.404656 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:09.404682 3121455 cri.go:89] found id: ""
	I1217 11:33:09.404690 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:33:09.404760 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:09.408286 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:33:09.408368 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:33:09.434122 3121455 cri.go:89] found id: ""
	I1217 11:33:09.434189 3121455 logs.go:282] 0 containers: []
	W1217 11:33:09.434205 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:33:09.434213 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:33:09.434277 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:33:09.460122 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:09.460156 3121455 cri.go:89] found id: ""
	I1217 11:33:09.460165 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:33:09.460230 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:09.463981 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:33:09.464055 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:33:09.491405 3121455 cri.go:89] found id: ""
	I1217 11:33:09.491428 3121455 logs.go:282] 0 containers: []
	W1217 11:33:09.491438 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:33:09.491444 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:33:09.491503 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:33:09.518393 3121455 cri.go:89] found id: ""
	I1217 11:33:09.518418 3121455 logs.go:282] 0 containers: []
	W1217 11:33:09.518427 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:33:09.518442 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:09.518454 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:09.587909 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:33:09.587930 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:33:09.587944 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:09.626558 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:09.626588 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:09.658485 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:09.658555 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:09.696499 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:09.696531 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:33:09.755506 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:33:09.755540 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:09.801479 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:09.801512 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:09.837949 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:33:09.837987 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:33:09.868120 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:09.868155 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:33:12.384906 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:12.395714 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:33:12.395790 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:33:12.421742 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:12.421764 3121455 cri.go:89] found id: ""
	I1217 11:33:12.421772 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:33:12.421832 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:12.425463 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:33:12.425543 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:33:12.450441 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:12.450461 3121455 cri.go:89] found id: ""
	I1217 11:33:12.450469 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:33:12.450524 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:12.454376 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:33:12.454453 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:33:12.479673 3121455 cri.go:89] found id: ""
	I1217 11:33:12.479697 3121455 logs.go:282] 0 containers: []
	W1217 11:33:12.479707 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:33:12.479713 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:33:12.479771 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:33:12.506491 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:12.506512 3121455 cri.go:89] found id: ""
	I1217 11:33:12.506521 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:33:12.506583 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:12.510301 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:33:12.510375 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:33:12.534632 3121455 cri.go:89] found id: ""
	I1217 11:33:12.534655 3121455 logs.go:282] 0 containers: []
	W1217 11:33:12.534664 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:33:12.534671 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:33:12.534732 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:33:12.562945 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:12.563010 3121455 cri.go:89] found id: ""
	I1217 11:33:12.563036 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:33:12.563108 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:12.566861 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:33:12.566948 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:33:12.596384 3121455 cri.go:89] found id: ""
	I1217 11:33:12.596405 3121455 logs.go:282] 0 containers: []
	W1217 11:33:12.596445 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:33:12.596454 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:33:12.596515 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:33:12.625433 3121455 cri.go:89] found id: ""
	I1217 11:33:12.625456 3121455 logs.go:282] 0 containers: []
	W1217 11:33:12.625464 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:33:12.625477 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:33:12.625490 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:12.662766 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:33:12.662795 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:12.694274 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:12.694305 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:12.729771 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:12.729802 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:33:12.790404 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:12.790441 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:12.859487 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:33:12.859507 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:12.859519 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:12.896760 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:33:12.896791 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:33:12.925101 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:12.925136 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:12.967733 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:12.967762 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
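
For reference, the dmesg gather that closes each cycle filters the kernel ring buffer down to warnings and worse (flag meanings per util-linux dmesg, which the node image is assumed to ship): -H human-readable output, -P no pager, -L=never no color codes, and --level warn,err,crit,alert,emerg as the severity filter, with tail -n 400 capping the result:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
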
	I1217 11:33:15.484087 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:15.497913 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:33:15.498002 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:33:15.551015 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:15.551036 3121455 cri.go:89] found id: ""
	I1217 11:33:15.551044 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:33:15.551108 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:15.555602 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:33:15.555678 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:33:15.590838 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:15.590857 3121455 cri.go:89] found id: ""
	I1217 11:33:15.590865 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:33:15.590924 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:15.595333 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:33:15.595461 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:33:15.623557 3121455 cri.go:89] found id: ""
	I1217 11:33:15.623629 3121455 logs.go:282] 0 containers: []
	W1217 11:33:15.623653 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:33:15.623674 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:33:15.623797 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:33:15.666553 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:15.666621 3121455 cri.go:89] found id: ""
	I1217 11:33:15.666644 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:33:15.666734 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:15.670735 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:33:15.670877 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:33:15.702834 3121455 cri.go:89] found id: ""
	I1217 11:33:15.702913 3121455 logs.go:282] 0 containers: []
	W1217 11:33:15.702936 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:33:15.702955 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:33:15.703045 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:33:15.738082 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:15.738154 3121455 cri.go:89] found id: ""
	I1217 11:33:15.738184 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:33:15.738272 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:15.742162 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:33:15.742271 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:33:15.769720 3121455 cri.go:89] found id: ""
	I1217 11:33:15.769797 3121455 logs.go:282] 0 containers: []
	W1217 11:33:15.769819 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:33:15.769840 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:33:15.769924 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:33:15.797910 3121455 cri.go:89] found id: ""
	I1217 11:33:15.797984 3121455 logs.go:282] 0 containers: []
	W1217 11:33:15.798008 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:33:15.798036 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:15.798092 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:15.857150 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:15.857184 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:15.911950 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:15.911979 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:33:15.928828 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:33:15.928857 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:15.986486 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:33:15.986521 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:16.056243 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:33:16.056274 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:33:16.105942 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:16.105982 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:33:16.181909 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:16.181945 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:16.278080 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:33:16.278102 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:16.278115 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:18.831755 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:18.842264 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:33:18.842343 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:33:18.871790 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:18.871814 3121455 cri.go:89] found id: ""
	I1217 11:33:18.871822 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:33:18.871883 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:18.875652 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:33:18.875732 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:33:18.902473 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:18.902500 3121455 cri.go:89] found id: ""
	I1217 11:33:18.902515 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:33:18.902588 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:18.906764 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:33:18.906837 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:33:18.933605 3121455 cri.go:89] found id: ""
	I1217 11:33:18.933627 3121455 logs.go:282] 0 containers: []
	W1217 11:33:18.933636 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:33:18.933644 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:33:18.933703 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:33:18.962539 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:18.962559 3121455 cri.go:89] found id: ""
	I1217 11:33:18.962568 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:33:18.962631 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:18.966366 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:33:18.966442 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:33:18.994230 3121455 cri.go:89] found id: ""
	I1217 11:33:18.994255 3121455 logs.go:282] 0 containers: []
	W1217 11:33:18.994265 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:33:18.994272 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:33:18.994333 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:33:19.037764 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:19.037790 3121455 cri.go:89] found id: ""
	I1217 11:33:19.037800 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:33:19.037861 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:19.042266 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:33:19.042348 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:33:19.069802 3121455 cri.go:89] found id: ""
	I1217 11:33:19.069824 3121455 logs.go:282] 0 containers: []
	W1217 11:33:19.069833 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:33:19.069839 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:33:19.069899 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:33:19.099973 3121455 cri.go:89] found id: ""
	I1217 11:33:19.100000 3121455 logs.go:282] 0 containers: []
	W1217 11:33:19.100010 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:33:19.100027 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:33:19.100040 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:19.137139 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:19.137178 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:19.171465 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:33:19.171497 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:33:19.202270 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:19.202307 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:33:19.273668 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:19.273707 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:33:19.292970 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:19.293011 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:19.371261 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:33:19.371340 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:33:19.371361 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:19.406237 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:19.406273 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:19.440478 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:19.440565 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:21.978877 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:21.989066 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:33:21.989139 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:33:22.033710 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:22.033785 3121455 cri.go:89] found id: ""
	I1217 11:33:22.033807 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:33:22.033897 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:22.038376 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:33:22.038501 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:33:22.068247 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:22.068318 3121455 cri.go:89] found id: ""
	I1217 11:33:22.068347 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:33:22.068465 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:22.073011 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:33:22.073138 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:33:22.102343 3121455 cri.go:89] found id: ""
	I1217 11:33:22.102369 3121455 logs.go:282] 0 containers: []
	W1217 11:33:22.102380 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:33:22.102386 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:33:22.102448 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:33:22.133572 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:22.133596 3121455 cri.go:89] found id: ""
	I1217 11:33:22.133604 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:33:22.133691 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:22.137540 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:33:22.137616 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:33:22.163421 3121455 cri.go:89] found id: ""
	I1217 11:33:22.163445 3121455 logs.go:282] 0 containers: []
	W1217 11:33:22.163454 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:33:22.163461 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:33:22.163524 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:33:22.195859 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:22.195884 3121455 cri.go:89] found id: ""
	I1217 11:33:22.195893 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:33:22.195950 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:22.199968 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:33:22.200046 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:33:22.226552 3121455 cri.go:89] found id: ""
	I1217 11:33:22.226578 3121455 logs.go:282] 0 containers: []
	W1217 11:33:22.226587 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:33:22.226594 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:33:22.226656 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:33:22.256926 3121455 cri.go:89] found id: ""
	I1217 11:33:22.256996 3121455 logs.go:282] 0 containers: []
	W1217 11:33:22.257023 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:33:22.257049 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:22.257080 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:33:22.316803 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:22.316839 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:22.384247 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:33:22.384269 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:33:22.384282 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:22.428727 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:22.428762 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:22.462370 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:22.462405 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:22.508783 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:22.508810 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:33:22.526141 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:33:22.526172 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:22.559605 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:22.559637 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:22.595489 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:33:22.595522 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:33:25.128953 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:25.139046 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:33:25.139114 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:33:25.166736 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:25.166762 3121455 cri.go:89] found id: ""
	I1217 11:33:25.166771 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:33:25.166828 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:25.170450 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:33:25.170531 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:33:25.199377 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:25.199402 3121455 cri.go:89] found id: ""
	I1217 11:33:25.199411 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:33:25.199466 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:25.203252 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:33:25.203329 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:33:25.227763 3121455 cri.go:89] found id: ""
	I1217 11:33:25.227788 3121455 logs.go:282] 0 containers: []
	W1217 11:33:25.227796 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:33:25.227802 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:33:25.227861 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:33:25.256020 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:25.256044 3121455 cri.go:89] found id: ""
	I1217 11:33:25.256053 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:33:25.256110 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:25.259890 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:33:25.259965 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:33:25.285462 3121455 cri.go:89] found id: ""
	I1217 11:33:25.285492 3121455 logs.go:282] 0 containers: []
	W1217 11:33:25.285500 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:33:25.285507 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:33:25.285568 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:33:25.311186 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:25.311205 3121455 cri.go:89] found id: ""
	I1217 11:33:25.311212 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:33:25.311267 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:25.315166 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:33:25.315234 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:33:25.340837 3121455 cri.go:89] found id: ""
	I1217 11:33:25.340860 3121455 logs.go:282] 0 containers: []
	W1217 11:33:25.340869 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:33:25.340875 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:33:25.340934 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:33:25.365713 3121455 cri.go:89] found id: ""
	I1217 11:33:25.365736 3121455 logs.go:282] 0 containers: []
	W1217 11:33:25.365745 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:33:25.365758 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:33:25.365769 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:25.398959 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:33:25.398989 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:25.437489 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:25.437522 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:25.470604 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:25.470640 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:25.498218 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:25.498254 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:33:25.554985 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:25.555026 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:33:25.571769 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:25.571806 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:25.641188 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:33:25.641210 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:25.641224 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:25.675333 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:33:25.675367 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:33:28.206313 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:28.219952 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:33:28.220023 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:33:28.259944 3121455 cri.go:89] found id: "d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:28.259968 3121455 cri.go:89] found id: ""
	I1217 11:33:28.259976 3121455 logs.go:282] 1 containers: [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770]
	I1217 11:33:28.260039 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:28.264151 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:33:28.264226 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:33:28.300343 3121455 cri.go:89] found id: "09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:28.300368 3121455 cri.go:89] found id: ""
	I1217 11:33:28.300377 3121455 logs.go:282] 1 containers: [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43]
	I1217 11:33:28.300448 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:28.304211 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:33:28.304286 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:33:28.339078 3121455 cri.go:89] found id: ""
	I1217 11:33:28.339103 3121455 logs.go:282] 0 containers: []
	W1217 11:33:28.339112 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:33:28.339119 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:33:28.339180 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:33:28.371546 3121455 cri.go:89] found id: "a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:28.371568 3121455 cri.go:89] found id: ""
	I1217 11:33:28.371578 3121455 logs.go:282] 1 containers: [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e]
	I1217 11:33:28.371641 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:28.376736 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:33:28.376810 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:33:28.407083 3121455 cri.go:89] found id: ""
	I1217 11:33:28.407106 3121455 logs.go:282] 0 containers: []
	W1217 11:33:28.407116 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:33:28.407122 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:33:28.407183 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:33:28.437525 3121455 cri.go:89] found id: "8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:28.437583 3121455 cri.go:89] found id: ""
	I1217 11:33:28.437598 3121455 logs.go:282] 1 containers: [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249]
	I1217 11:33:28.437666 3121455 ssh_runner.go:195] Run: which crictl
	I1217 11:33:28.441465 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:33:28.441539 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:33:28.466701 3121455 cri.go:89] found id: ""
	I1217 11:33:28.466729 3121455 logs.go:282] 0 containers: []
	W1217 11:33:28.466739 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:33:28.466745 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:33:28.466806 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:33:28.492783 3121455 cri.go:89] found id: ""
	I1217 11:33:28.492805 3121455 logs.go:282] 0 containers: []
	W1217 11:33:28.492814 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:33:28.492832 3121455 logs.go:123] Gathering logs for kube-apiserver [d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770] ...
	I1217 11:33:28.492848 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770"
	I1217 11:33:28.531145 3121455 logs.go:123] Gathering logs for etcd [09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43] ...
	I1217 11:33:28.531178 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43"
	I1217 11:33:28.567860 3121455 logs.go:123] Gathering logs for kube-controller-manager [8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249] ...
	I1217 11:33:28.567892 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249"
	I1217 11:33:28.601349 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:33:28.601383 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:33:28.634395 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:33:28.634425 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:33:28.650549 3121455 logs.go:123] Gathering logs for kube-scheduler [a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e] ...
	I1217 11:33:28.650580 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e"
	I1217 11:33:28.687671 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:33:28.687702 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:33:28.723354 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:33:28.723396 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:33:28.798378 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:33:28.798421 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:33:28.878673 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
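	The cycles from 11:33:12 through 11:33:28 repeat roughly every three seconds: minikube is polling for a live kube-apiserver process before declaring the control-plane restart failed (the give-up point is the 4m2s duration logged just below). A hedged bash approximation of that wait loop, using the same pgrep pattern as the log and an assumed 4-minute budget:

	    deadline=$((SECONDS + 240))    # assumed budget; the actual limit is internal to minikube
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; break; }
	      sleep 3
	    done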
	I1217 11:33:31.379734 3121455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:33:31.389649 3121455 kubeadm.go:602] duration metric: took 4m2.155088947s to restartPrimaryControlPlane
	W1217 11:33:31.389716 3121455 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 11:33:31.389780 3121455 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 11:33:31.920690 3121455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:33:31.934944 3121455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:33:31.943972 3121455 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:33:31.944034 3121455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:33:31.954643 3121455 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:33:31.954665 3121455 kubeadm.go:158] found existing configuration files:
	
	I1217 11:33:31.954718 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:33:31.962699 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:33:31.962764 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:33:31.970547 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:33:31.978652 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:33:31.978718 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:33:31.986323 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:33:31.994665 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:33:31.994733 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:33:32.005903 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:33:32.014735 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:33:32.014829 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
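	The block above is a stale-kubeconfig sweep: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is absent. Here every grep exits with status 2 because kubeadm reset already deleted the files, so the rm calls are no-ops. The equivalent loop, with the endpoint and paths taken verbatim from the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done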
	I1217 11:33:32.022680 3121455 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:33:32.073472 3121455 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:33:32.073760 3121455 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:33:32.146165 3121455 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:33:32.146240 3121455 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:33:32.146276 3121455 kubeadm.go:319] OS: Linux
	I1217 11:33:32.146325 3121455 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:33:32.146374 3121455 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:33:32.146423 3121455 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:33:32.146472 3121455 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:33:32.146521 3121455 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:33:32.146573 3121455 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:33:32.146619 3121455 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:33:32.146669 3121455 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:33:32.146716 3121455 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:33:32.215870 3121455 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:33:32.215982 3121455 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:33:32.216073 3121455 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:33:41.496481 3121455 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:33:41.499564 3121455 out.go:252]   - Generating certificates and keys ...
	I1217 11:33:41.499652 3121455 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:33:41.499726 3121455 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:33:41.499799 3121455 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 11:33:41.499856 3121455 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 11:33:41.499944 3121455 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 11:33:41.499997 3121455 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 11:33:41.500369 3121455 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 11:33:41.500488 3121455 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 11:33:41.500590 3121455 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 11:33:41.500894 3121455 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 11:33:41.500936 3121455 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 11:33:41.500989 3121455 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:33:42.126272 3121455 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:33:42.397185 3121455 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:33:42.698548 3121455 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:33:42.938230 3121455 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:33:43.285593 3121455 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:33:43.286307 3121455 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:33:43.288907 3121455 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:33:43.292465 3121455 out.go:252]   - Booting up control plane ...
	I1217 11:33:43.292569 3121455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:33:43.292643 3121455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:33:43.292709 3121455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:33:43.307402 3121455 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:33:43.307514 3121455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:33:43.318092 3121455 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:33:43.319137 3121455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:33:43.320404 3121455 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:33:43.441817 3121455 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:33:43.441956 3121455 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:37:43.442198 3121455 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000702827s
	I1217 11:37:43.442230 3121455 kubeadm.go:319] 
	I1217 11:37:43.442287 3121455 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:37:43.442320 3121455 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:37:43.442424 3121455 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:37:43.442430 3121455 kubeadm.go:319] 
	I1217 11:37:43.442534 3121455 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:37:43.442567 3121455 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:37:43.442598 3121455 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:37:43.442602 3121455 kubeadm.go:319] 
	I1217 11:37:43.446624 3121455 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:37:43.447050 3121455 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:37:43.447162 3121455 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:37:43.447400 3121455 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 11:37:43.447410 3121455 kubeadm.go:319] 
	I1217 11:37:43.447479 3121455 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
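	kubeadm's own troubleshooting hints can be followed directly on the node, together with the health probe it was polling; the commands below are exactly those quoted in the failure message above:

	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 50
	    curl -sS http://127.0.0.1:10248/healthz; echo    # the endpoint kubeadm waited on for 4m0s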
	W1217 11:37:43.447605 3121455 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000702827s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
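	The SystemVerification warning above flags a cgroups v1 host, which kubelet v1.35 rejects unless 'FailCgroupV1' is explicitly set to 'false'. A standard way to confirm which cgroup hierarchy the node mounts (a general check, not taken from this log):

	    stat -fc %T /sys/fs/cgroup    # prints cgroup2fs for v2, tmpfs for a v1 hierarchy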
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000702827s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
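The probe kubeadm describes can be replayed by hand on the node to separate "kubelet never started" from "kubelet started but is unhealthy"; the endpoint and the systemctl/journalctl commands below come straight from the error text, with only the tail added to bound output:

    # A healthy kubelet answers "ok"; "context deadline exceeded" or
    # "connection refused" means port 10248 was never served.
    curl -sSL http://127.0.0.1:10248/healthz
    # Then inspect the unit and its recent journal, per the advice above.
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50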
	
	I1217 11:37:43.447683 3121455 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 11:37:43.861098 3121455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:37:43.874294 3121455 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:37:43.874360 3121455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:37:43.882355 3121455 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:37:43.882386 3121455 kubeadm.go:158] found existing configuration files:
	
	I1217 11:37:43.882440 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:37:43.890124 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:37:43.890189 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:37:43.897545 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:37:43.905232 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:37:43.905298 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:37:43.912954 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:37:43.920394 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:37:43.920491 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:37:43.927756 3121455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:37:43.935348 3121455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:37:43.935411 3121455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
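The eight Run lines above are minikube's stale-kubeconfig cleanup: each file is grepped for the expected control-plane URL and deleted when the check fails. Condensed into one loop, as a sketch using exactly the URL and paths from the log:

    for f in admin kubelet controller-manager scheduler; do
      # grep exits non-zero when the URL is absent or the file is missing,
      # in which case the kubeconfig is treated as stale and removed.
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" \
          "/etc/kubernetes/${f}.conf"; then
        sudo rm -f "/etc/kubernetes/${f}.conf"
      fi
    done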
	I1217 11:37:43.943233 3121455 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:37:43.979411 3121455 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:37:43.979494 3121455 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:37:44.059286 3121455 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:37:44.059370 3121455 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:37:44.059410 3121455 kubeadm.go:319] OS: Linux
	I1217 11:37:44.059459 3121455 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:37:44.059512 3121455 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:37:44.059568 3121455 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:37:44.059624 3121455 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:37:44.059686 3121455 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:37:44.059761 3121455 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:37:44.059815 3121455 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:37:44.059872 3121455 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:37:44.059925 3121455 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:37:44.134971 3121455 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:37:44.135115 3121455 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:37:44.135230 3121455 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:37:44.144855 3121455 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:37:44.149765 3121455 out.go:252]   - Generating certificates and keys ...
	I1217 11:37:44.149855 3121455 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:37:44.149919 3121455 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:37:44.149995 3121455 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 11:37:44.150055 3121455 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 11:37:44.150125 3121455 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 11:37:44.150178 3121455 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 11:37:44.150244 3121455 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 11:37:44.150321 3121455 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 11:37:44.150408 3121455 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 11:37:44.150482 3121455 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 11:37:44.150520 3121455 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 11:37:44.150577 3121455 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:37:44.430223 3121455 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:37:44.525650 3121455 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:37:44.907872 3121455 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:37:45.356792 3121455 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:37:45.817324 3121455 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:37:45.818244 3121455 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:37:45.821134 3121455 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:37:45.824546 3121455 out.go:252]   - Booting up control plane ...
	I1217 11:37:45.824697 3121455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:37:45.825169 3121455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:37:45.827267 3121455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:37:45.848961 3121455 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:37:45.849071 3121455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:37:45.857350 3121455 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:37:45.857785 3121455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:37:45.858024 3121455 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:37:45.994209 3121455 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:37:45.994322 3121455 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:41:45.995011 3121455 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000850203s
	I1217 11:41:45.995170 3121455 kubeadm.go:319] 
	I1217 11:41:45.995243 3121455 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:41:45.995283 3121455 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:41:45.995388 3121455 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:41:45.995397 3121455 kubeadm.go:319] 
	I1217 11:41:45.995500 3121455 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:41:45.995537 3121455 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:41:45.995572 3121455 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:41:45.995580 3121455 kubeadm.go:319] 
	I1217 11:41:46.000737 3121455 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:41:46.001172 3121455 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:41:46.001287 3121455 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:41:46.001556 3121455 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 11:41:46.001565 3121455 kubeadm.go:319] 
	I1217 11:41:46.001636 3121455 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
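The cgroups v1 warning above spells out the escape hatch for kubelet v1.35 or newer. A minimal sketch of applying it by hand, assuming the YAML spelling of the option named 'FailCgroupV1' is failCgroupV1, and noting that kubeadm regenerates /var/lib/kubelet/config.yaml on every init, so in practice the setting belongs in the KubeletConfiguration passed to kubeadm:

    # Assumption: failCgroupV1 is the KubeletConfiguration key for the
    # 'FailCgroupV1' option named in the warning above.
    sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet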
	I1217 11:41:46.001706 3121455 kubeadm.go:403] duration metric: took 12m16.86095773s to StartCluster
	I1217 11:41:46.001751 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:41:46.001843 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:41:46.043296 3121455 cri.go:89] found id: ""
	I1217 11:41:46.043321 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.043331 3121455 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:41:46.043338 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:41:46.043398 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:41:46.084074 3121455 cri.go:89] found id: ""
	I1217 11:41:46.084099 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.084108 3121455 logs.go:284] No container was found matching "etcd"
	I1217 11:41:46.084114 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:41:46.084181 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:41:46.119857 3121455 cri.go:89] found id: ""
	I1217 11:41:46.119883 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.119892 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:41:46.119898 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:41:46.119960 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:41:46.153180 3121455 cri.go:89] found id: ""
	I1217 11:41:46.153204 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.153212 3121455 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:41:46.153218 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:41:46.153276 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:41:46.187021 3121455 cri.go:89] found id: ""
	I1217 11:41:46.187043 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.187052 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:41:46.187059 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:41:46.187118 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:41:46.217732 3121455 cri.go:89] found id: ""
	I1217 11:41:46.217753 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.217762 3121455 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:41:46.217769 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:41:46.217830 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:41:46.263035 3121455 cri.go:89] found id: ""
	I1217 11:41:46.263056 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.263065 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:41:46.263071 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:41:46.263129 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:41:46.304678 3121455 cri.go:89] found id: ""
	I1217 11:41:46.304754 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.304800 3121455 logs.go:284] No container was found matching "storage-provisioner"
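The block above is minikube checking, component by component, whether any control-plane or addon container was ever created; every probe comes back empty. The same sweep by hand, a sketch mirroring the crictl invocation shown in the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      # Empty output means no container, running or exited, with that name.
      sudo crictl ps -a --quiet --name="$name"
    done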
	I1217 11:41:46.304825 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:41:46.304853 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:41:46.327931 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:41:46.328010 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:41:46.434272 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:41:46.434335 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:41:46.434363 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:41:46.485380 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:41:46.485460 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:41:46.520267 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:41:46.520300 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 11:41:46.584108 3121455 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000850203s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 11:41:46.584222 3121455 out.go:285] * 
	W1217 11:41:46.584491 3121455 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000850203s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:41:46.584552 3121455 out.go:285] * 
	W1217 11:41:46.586816 3121455 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:41:46.592433 3121455 out.go:203] 
	W1217 11:41:46.596188 3121455 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000850203s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:41:46.596583 3121455 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:41:46.596697 3121455 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 11:41:46.601285 3121455 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-452067 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-452067 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-452067 version --output=json: exit status 1 (201.832999ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
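Combining the log's own suggestion with the failing invocation recorded above gives the natural next retry; only the --extra-config flag is new relative to the original command, everything else is copied from it:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-452067 \
      --memory=3072 --kubernetes-version=v1.35.0-rc.1 \
      --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd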
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-17 11:41:48.199246213 +0000 UTC m=+4784.797756974
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-452067
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-452067:

-- stdout --
	[
	    {
	        "Id": "dbb4f7cb5f37276ee2e95cabc55a4e20ce3048d5e17327d033d7ee0109cf7bd8",
	        "Created": "2025-12-17T11:28:44.012105734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3121587,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:29:13.585311422Z",
	            "FinishedAt": "2025-12-17T11:29:12.645474512Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/dbb4f7cb5f37276ee2e95cabc55a4e20ce3048d5e17327d033d7ee0109cf7bd8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dbb4f7cb5f37276ee2e95cabc55a4e20ce3048d5e17327d033d7ee0109cf7bd8/hostname",
	        "HostsPath": "/var/lib/docker/containers/dbb4f7cb5f37276ee2e95cabc55a4e20ce3048d5e17327d033d7ee0109cf7bd8/hosts",
	        "LogPath": "/var/lib/docker/containers/dbb4f7cb5f37276ee2e95cabc55a4e20ce3048d5e17327d033d7ee0109cf7bd8/dbb4f7cb5f37276ee2e95cabc55a4e20ce3048d5e17327d033d7ee0109cf7bd8-json.log",
	        "Name": "/kubernetes-upgrade-452067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-452067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-452067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dbb4f7cb5f37276ee2e95cabc55a4e20ce3048d5e17327d033d7ee0109cf7bd8",
	                "LowerDir": "/var/lib/docker/overlay2/a0587eafcb2222a41de3b9ee3cbcae8cea94a4487df8ade8089bc389714513d9-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a0587eafcb2222a41de3b9ee3cbcae8cea94a4487df8ade8089bc389714513d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a0587eafcb2222a41de3b9ee3cbcae8cea94a4487df8ade8089bc389714513d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a0587eafcb2222a41de3b9ee3cbcae8cea94a4487df8ade8089bc389714513d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-452067",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-452067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-452067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-452067",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-452067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f120291c4c6252d8e0f5770279c5848524ead15aca8e9b8b40a3efc640cbaba8",
	            "SandboxKey": "/var/run/docker/netns/f120291c4c62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35967"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35966"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-452067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:30:a0:fa:90:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d68d6e30f6d42ab8b83249c75fc8e7e78e9649c5375ecfe3d0b1ec8a24e18bd6",
	                    "EndpointID": "b1a35d4d331384ed58cbfe610d4ce34d33f35cadbb598736197a4833e2db66ed",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-452067",
	                        "dbb4f7cb5f37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
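
The inspect dump above shows Docker publishing each of the kic container's guest ports (22, 2376, 5000, 8443, 32443) on an ephemeral 127.0.0.1 host port. A minimal Go sketch of reading that mapping back by decoding the same "docker container inspect" JSON shown above (illustrative only; this struct and program are not minikube code):

	// portprobe.go: a sketch, not minikube code. It decodes the
	// NetworkSettings.Ports mapping shown in the inspect output above and
	// prints where guest port 22/tcp landed on the host.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// Container name taken from the log above.
		out, err := exec.Command("docker", "container", "inspect", "kubernetes-upgrade-452067").Output()
		if err != nil {
			log.Fatal(err)
		}
		var cs []inspect
		if err := json.Unmarshal(out, &cs); err != nil {
			log.Fatal(err)
		}
		if len(cs) == 0 {
			log.Fatal("no container in inspect output")
		}
		// minikube dials this binding for SSH provisioning.
		for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("22/tcp is published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
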
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-452067 -n kubernetes-upgrade-452067
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-452067 -n kubernetes-upgrade-452067: exit status 2 (393.407468ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-452067 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-348887 sudo systemctl status kubelet --all --full --no-pager                                           │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo systemctl cat kubelet --no-pager                                                           │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo journalctl -xeu kubelet --all --full --no-pager                                            │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo cat /etc/kubernetes/kubelet.conf                                                           │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo cat /var/lib/kubelet/config.yaml                                                           │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo systemctl status docker --all --full --no-pager                                            │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo systemctl cat docker --no-pager                                                            │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo cat /etc/docker/daemon.json                                                                │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo docker system info                                                                         │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo cri-dockerd --version                                                                      │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo systemctl cat containerd --no-pager                                                        │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo cat /etc/containerd/config.toml                                                            │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo containerd config dump                                                                     │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo systemctl status crio --all --full --no-pager                                              │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo systemctl cat crio --no-pager                                                              │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ ssh     │ -p cilium-348887 sudo crio config                                                                                │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	│ delete  │ -p cilium-348887                                                                                                 │ cilium-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │ 17 Dec 25 11:41 UTC │
	│ start   │ -p force-systemd-env-085980 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-085980 │ jenkins │ v1.37.0 │ 17 Dec 25 11:41 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:41:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:41:32.219094 3167525 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:41:32.219286 3167525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:41:32.219299 3167525 out.go:374] Setting ErrFile to fd 2...
	I1217 11:41:32.219306 3167525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:41:32.219719 3167525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:41:32.220264 3167525 out.go:368] Setting JSON to false
	I1217 11:41:32.221359 3167525 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":62643,"bootTime":1765909050,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:41:32.221466 3167525 start.go:143] virtualization:  
	I1217 11:41:32.224857 3167525 out.go:179] * [force-systemd-env-085980] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:41:32.228546 3167525 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:41:32.228710 3167525 notify.go:221] Checking for updates...
	I1217 11:41:32.234469 3167525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:41:32.237488 3167525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:41:32.240456 3167525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:41:32.243344 3167525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:41:32.246306 3167525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1217 11:41:32.249931 3167525 config.go:182] Loaded profile config "kubernetes-upgrade-452067": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:41:32.250045 3167525 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:41:32.278630 3167525 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:41:32.278760 3167525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:41:32.347218 3167525 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:41:32.337803202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:41:32.347333 3167525 docker.go:319] overlay module found
	I1217 11:41:32.350599 3167525 out.go:179] * Using the docker driver based on user configuration
	I1217 11:41:32.353508 3167525 start.go:309] selected driver: docker
	I1217 11:41:32.353533 3167525 start.go:927] validating driver "docker" against <nil>
	I1217 11:41:32.353564 3167525 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:41:32.354310 3167525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:41:32.412917 3167525 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:41:32.403495104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:41:32.413074 3167525 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:41:32.413336 3167525 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 11:41:32.416259 3167525 out.go:179] * Using Docker driver with root privileges
	I1217 11:41:32.419180 3167525 cni.go:84] Creating CNI manager for ""
	I1217 11:41:32.419260 3167525 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:41:32.419270 3167525 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:41:32.419367 3167525 start.go:353] cluster config:
	{Name:force-systemd-env-085980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-env-085980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:41:32.422535 3167525 out.go:179] * Starting "force-systemd-env-085980" primary control-plane node in "force-systemd-env-085980" cluster
	I1217 11:41:32.425385 3167525 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:41:32.428438 3167525 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:41:32.431391 3167525 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1217 11:41:32.431447 3167525 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1217 11:41:32.431458 3167525 cache.go:65] Caching tarball of preloaded images
	I1217 11:41:32.431461 3167525 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:41:32.431566 3167525 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 11:41:32.431577 3167525 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1217 11:41:32.431713 3167525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/config.json ...
	I1217 11:41:32.431745 3167525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/config.json: {Name:mka18539f4dbb1f048112e5d5f3471ca568eaf6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:41:32.452123 3167525 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:41:32.452156 3167525 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:41:32.452178 3167525 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:41:32.452213 3167525 start.go:360] acquireMachinesLock for force-systemd-env-085980: {Name:mk0bb6fb9eade09b22466061b62e2b9c922981d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:41:32.452369 3167525 start.go:364] duration metric: took 127.095µs to acquireMachinesLock for "force-systemd-env-085980"
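
The acquireMachinesLock lines above reflect a polling file-lock with a 500ms retry delay and a 10m timeout. A rough sketch of that pattern, assuming a simple O_EXCL lock file (tryLock is a hypothetical helper, not minikube's pkg/util/lock):

	// lock.go: illustrative polling file lock. O_EXCL makes the create
	// atomic on POSIX filesystems, so only one process wins the lock.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func tryLock(lockPath string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(lockPath) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for " + lockPath)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		// Delay and timeout values mirror the log above.
		release, err := tryLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; provisioning machine...")
	}
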
	I1217 11:41:32.452400 3167525 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-085980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-env-085980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:41:32.452506 3167525 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:41:32.455980 3167525 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:41:32.456236 3167525 start.go:159] libmachine.API.Create for "force-systemd-env-085980" (driver="docker")
	I1217 11:41:32.456281 3167525 client.go:173] LocalClient.Create starting
	I1217 11:41:32.456365 3167525 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 11:41:32.456405 3167525 main.go:143] libmachine: Decoding PEM data...
	I1217 11:41:32.456454 3167525 main.go:143] libmachine: Parsing certificate...
	I1217 11:41:32.456516 3167525 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 11:41:32.456547 3167525 main.go:143] libmachine: Decoding PEM data...
	I1217 11:41:32.456559 3167525 main.go:143] libmachine: Parsing certificate...
	I1217 11:41:32.456949 3167525 cli_runner.go:164] Run: docker network inspect force-systemd-env-085980 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:41:32.476670 3167525 cli_runner.go:211] docker network inspect force-systemd-env-085980 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:41:32.476757 3167525 network_create.go:284] running [docker network inspect force-systemd-env-085980] to gather additional debugging logs...
	I1217 11:41:32.476789 3167525 cli_runner.go:164] Run: docker network inspect force-systemd-env-085980
	W1217 11:41:32.493396 3167525 cli_runner.go:211] docker network inspect force-systemd-env-085980 returned with exit code 1
	I1217 11:41:32.493428 3167525 network_create.go:287] error running [docker network inspect force-systemd-env-085980]: docker network inspect force-systemd-env-085980: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-085980 not found
	I1217 11:41:32.493444 3167525 network_create.go:289] output of [docker network inspect force-systemd-env-085980]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-085980 not found
	
	** /stderr **
	I1217 11:41:32.493547 3167525 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:41:32.511247 3167525 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
	I1217 11:41:32.511564 3167525 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e0545776686c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:70:9e:49:ed:7d} reservation:<nil>}
	I1217 11:41:32.511883 3167525 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-279becfad84b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:b7:62:6e:a9:ee} reservation:<nil>}
	I1217 11:41:32.512225 3167525 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d68d6e30f6d4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:93:f2:4e:2e:89} reservation:<nil>}
	I1217 11:41:32.512729 3167525 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fd710}
	I1217 11:41:32.512757 3167525 network_create.go:124] attempt to create docker network force-systemd-env-085980 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 11:41:32.512827 3167525 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-085980 force-systemd-env-085980
	I1217 11:41:32.578656 3167525 network_create.go:108] docker network force-systemd-env-085980 192.168.85.0/24 created
	I1217 11:41:32.578691 3167525 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-085980" container
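
The subnet scan above walks candidate 192.168.x.0/24 ranges in steps of 9 (49, 58, 67, 76, ...) and picks the first one no Docker bridge already owns. A sketch of the same scan under those assumptions (the taken helper here is hypothetical and simply queries docker network inspect; minikube's real logic is richer):

	// subnets.go: illustrative only. Scan the 192.168.x.0/24 candidates in
	// steps of 9 and skip any subnet already assigned to a docker network.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func taken(subnet string) bool {
		// List every docker network and compare its IPAM subnet.
		out, _ := exec.Command("docker", "network", "ls", "-q").Output()
		for _, id := range strings.Fields(string(out)) {
			cfg, _ := exec.Command("docker", "network", "inspect",
				"-f", "{{range .IPAM.Config}}{{.Subnet}}{{end}}", id).Output()
			if strings.TrimSpace(string(cfg)) == subnet {
				return true
			}
		}
		return false
	}

	func main() {
		for third := 49; third <= 255; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if taken(subnet) {
				fmt.Println("skipping taken subnet", subnet)
				continue
			}
			fmt.Println("using free private subnet", subnet)
			return
		}
	}
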
	I1217 11:41:32.578792 3167525 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:41:32.595991 3167525 cli_runner.go:164] Run: docker volume create force-systemd-env-085980 --label name.minikube.sigs.k8s.io=force-systemd-env-085980 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:41:32.615227 3167525 oci.go:103] Successfully created a docker volume force-systemd-env-085980
	I1217 11:41:32.615324 3167525 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-085980-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-085980 --entrypoint /usr/bin/test -v force-systemd-env-085980:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:41:33.187818 3167525 oci.go:107] Successfully prepared a docker volume force-systemd-env-085980
	I1217 11:41:33.187897 3167525 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1217 11:41:33.187916 3167525 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:41:33.187983 3167525 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-085980:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:41:37.336686 3167525 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-085980:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.148661952s)
	I1217 11:41:37.336721 3167525 kic.go:203] duration metric: took 4.148804235s to extract preloaded images to volume ...
	W1217 11:41:37.336856 3167525 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 11:41:37.336968 3167525 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:41:37.388190 3167525 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-085980 --name force-systemd-env-085980 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-085980 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-085980 --network force-systemd-env-085980 --ip 192.168.85.2 --volume force-systemd-env-085980:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:41:37.712888 3167525 cli_runner.go:164] Run: docker container inspect force-systemd-env-085980 --format={{.State.Running}}
	I1217 11:41:37.739939 3167525 cli_runner.go:164] Run: docker container inspect force-systemd-env-085980 --format={{.State.Status}}
	I1217 11:41:37.763038 3167525 cli_runner.go:164] Run: docker exec force-systemd-env-085980 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:41:37.810534 3167525 oci.go:144] the created container "force-systemd-env-085980" has a running status.
	I1217 11:41:37.810593 3167525 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/force-systemd-env-085980/id_ed25519...
	I1217 11:41:37.817395 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/force-systemd-env-085980/id_ed25519.pub -> /home/docker/.ssh/authorized_keys
	I1217 11:41:37.817448 3167525 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/force-systemd-env-085980/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 11:41:37.846030 3167525 cli_runner.go:164] Run: docker container inspect force-systemd-env-085980 --format={{.State.Status}}
	I1217 11:41:37.865760 3167525 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:41:37.865782 3167525 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-085980 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:41:37.923224 3167525 cli_runner.go:164] Run: docker container inspect force-systemd-env-085980 --format={{.State.Status}}
	I1217 11:41:37.945627 3167525 machine.go:94] provisionDockerMachine start ...
	I1217 11:41:37.945726 3167525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-085980
	I1217 11:41:37.973674 3167525 main.go:143] libmachine: Using SSH client type: native
	I1217 11:41:37.973799 3167525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35993 <nil> <nil>}
	I1217 11:41:37.973814 3167525 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:41:37.974469 3167525 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 11:41:41.112116 3167525 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-085980
	
	I1217 11:41:41.112141 3167525 ubuntu.go:182] provisioning hostname "force-systemd-env-085980"
	I1217 11:41:41.112214 3167525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-085980
	I1217 11:41:41.130020 3167525 main.go:143] libmachine: Using SSH client type: native
	I1217 11:41:41.130131 3167525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35993 <nil> <nil>}
	I1217 11:41:41.130148 3167525 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-085980 && echo "force-systemd-env-085980" | sudo tee /etc/hostname
	I1217 11:41:41.270490 3167525 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-085980
	
	I1217 11:41:41.270588 3167525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-085980
	I1217 11:41:41.288306 3167525 main.go:143] libmachine: Using SSH client type: native
	I1217 11:41:41.288474 3167525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 35993 <nil> <nil>}
	I1217 11:41:41.288498 3167525 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-085980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-085980/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-085980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:41:41.424602 3167525 main.go:143] libmachine: SSH cmd err, output: <nil>: 
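
provisionDockerMachine drives everything after this point over SSH to the container's published 22/tcp port (35993 above); the first dial often fails with a handshake EOF while sshd is still starting and is simply retried. A sketch of such a client, assuming golang.org/x/crypto/ssh is available and using the key path from the log:

	// sshrun.go: a minimal sketch of running one command over the kic
	// container's published SSH port, the way the provisioner above does.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/force-systemd-env-085980/id_ed25519")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a localhost-only test VM
		}
		// 35993 is the host port Docker picked for 22/tcp in the log above.
		client, err := ssh.Dial("tcp", "127.0.0.1:35993", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("%s err=%v\n", out, err)
	}
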
	I1217 11:41:41.424632 3167525 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:41:41.424678 3167525 ubuntu.go:190] setting up certificates
	I1217 11:41:41.424693 3167525 provision.go:84] configureAuth start
	I1217 11:41:41.424773 3167525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-085980
	I1217 11:41:41.441318 3167525 provision.go:143] copyHostCerts
	I1217 11:41:41.441362 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:41:41.441399 3167525 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:41:41.441412 3167525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:41:41.441491 3167525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:41:41.441583 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:41:41.441613 3167525 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:41:41.441621 3167525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:41:41.441652 3167525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:41:41.441706 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:41:41.441727 3167525 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:41:41.441732 3167525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:41:41.441757 3167525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:41:41.441809 3167525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-085980 san=[127.0.0.1 192.168.85.2 force-systemd-env-085980 localhost minikube]
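
The server cert is generated with both IP and DNS SANs so one certificate serves 127.0.0.1, the container IP, and the minikube hostnames listed above. A compact sketch of issuing such a certificate with Go's crypto/x509 (self-signed here for brevity, whereas the log signs with ca-key.pem):

	// servercert.go: illustrative server-cert issuance using the SANs,
	// org, and 26280h expiration that appear in the log above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-085980"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"force-systemd-env-085980", "localhost", "minikube"},
		}
		// Self-signed: template doubles as parent. minikube would pass its CA here.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
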
	I1217 11:41:41.548102 3167525 provision.go:177] copyRemoteCerts
	I1217 11:41:41.548171 3167525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:41:41.548215 3167525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-085980
	I1217 11:41:41.565493 3167525 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35993 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/force-systemd-env-085980/id_ed25519 Username:docker}
	I1217 11:41:41.660245 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 11:41:41.660319 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:41:41.678300 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 11:41:41.678362 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 11:41:41.696660 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 11:41:41.696779 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:41:41.715304 3167525 provision.go:87] duration metric: took 290.576343ms to configureAuth
	I1217 11:41:41.715334 3167525 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:41:41.715518 3167525 config.go:182] Loaded profile config "force-systemd-env-085980": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 11:41:41.715532 3167525 machine.go:97] duration metric: took 3.769886277s to provisionDockerMachine
	I1217 11:41:41.715539 3167525 client.go:176] duration metric: took 9.259247262s to LocalClient.Create
	I1217 11:41:41.715560 3167525 start.go:167] duration metric: took 9.259325085s to libmachine.API.Create "force-systemd-env-085980"
	I1217 11:41:41.715574 3167525 start.go:293] postStartSetup for "force-systemd-env-085980" (driver="docker")
	I1217 11:41:41.715583 3167525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:41:41.715640 3167525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:41:41.715693 3167525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-085980
	I1217 11:41:41.732878 3167525 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35993 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/force-systemd-env-085980/id_ed25519 Username:docker}
	I1217 11:41:41.828468 3167525 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:41:41.831924 3167525 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:41:41.831952 3167525 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:41:41.831964 3167525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:41:41.832017 3167525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:41:41.832100 3167525 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:41:41.832110 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /etc/ssl/certs/29245742.pem
	I1217 11:41:41.832212 3167525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:41:41.839602 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:41:41.857232 3167525 start.go:296] duration metric: took 141.643826ms for postStartSetup
	I1217 11:41:41.857590 3167525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-085980
	I1217 11:41:41.873759 3167525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/config.json ...
	I1217 11:41:41.874041 3167525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:41:41.874100 3167525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-085980
	I1217 11:41:41.890553 3167525 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35993 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/force-systemd-env-085980/id_ed25519 Username:docker}
	I1217 11:41:41.981347 3167525 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:41:41.986171 3167525 start.go:128] duration metric: took 9.533645978s to createHost
	I1217 11:41:41.986193 3167525 start.go:83] releasing machines lock for "force-systemd-env-085980", held for 9.533810733s
	I1217 11:41:41.986264 3167525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-085980
	I1217 11:41:42.008995 3167525 ssh_runner.go:195] Run: cat /version.json
	I1217 11:41:42.009054 3167525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-085980
	I1217 11:41:42.009646 3167525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:41:42.009717 3167525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-085980
	I1217 11:41:42.034843 3167525 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35993 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/force-systemd-env-085980/id_ed25519 Username:docker}
	I1217 11:41:42.044543 3167525 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35993 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/force-systemd-env-085980/id_ed25519 Username:docker}
	I1217 11:41:42.148641 3167525 ssh_runner.go:195] Run: systemctl --version
	I1217 11:41:42.244892 3167525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:41:42.250348 3167525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:41:42.250438 3167525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:41:42.286178 3167525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 11:41:42.286223 3167525 start.go:496] detecting cgroup driver to use...
	I1217 11:41:42.286242 3167525 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1217 11:41:42.286321 3167525 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:41:42.303844 3167525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:41:42.318806 3167525 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:41:42.318920 3167525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:41:42.338561 3167525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:41:42.358885 3167525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:41:42.482055 3167525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:41:42.633945 3167525 docker.go:234] disabling docker service ...
	I1217 11:41:42.634035 3167525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:41:42.655030 3167525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:41:42.669227 3167525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:41:42.797424 3167525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:41:42.924193 3167525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:41:42.937812 3167525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:41:42.952675 3167525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:41:42.961891 3167525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:41:42.970846 3167525 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1217 11:41:42.970913 3167525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1217 11:41:42.979628 3167525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:41:42.988531 3167525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:41:42.997061 3167525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:41:43.008343 3167525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:41:43.017845 3167525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:41:43.027407 3167525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:41:43.037011 3167525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:41:43.046527 3167525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:41:43.054272 3167525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:41:43.061920 3167525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:41:43.179033 3167525 ssh_runner.go:195] Run: sudo systemctl restart containerd
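
The sed pipeline above rewrites /etc/containerd/config.toml in place: pinning the pause image, forcing SystemdCgroup = true so containerd and the kubelet agree on the systemd cgroup driver, normalizing the runc runtime name, and re-enabling unprivileged ports, then reloads and restarts containerd. The SystemdCgroup flip expressed in Go, for illustration only:

	// cgroupflip.go: the same edit as the sed -r 's|^( *)SystemdCgroup = ...'
	// line above, done with Go's regexp package.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// (?m) anchors ^ and $ per line; ${1} preserves the original indent.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
		// containerd must be restarted (systemctl restart containerd) before
		// the runtime picks up the new cgroup driver.
	}
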
	I1217 11:41:43.343169 3167525 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:41:43.343252 3167525 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:41:43.347210 3167525 start.go:564] Will wait 60s for crictl version
	I1217 11:41:43.347281 3167525 ssh_runner.go:195] Run: which crictl
	I1217 11:41:43.350981 3167525 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:41:43.377001 3167525 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
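
Before probing the runtime, minikube polls for the containerd socket (up to 60s) and then for a working crictl, as the "Will wait 60s" lines above show. A sketch of that wait loop (waitForSocket is a hypothetical helper, not minikube code):

	// waitsock.go: poll until a unix socket appears at the given path, or
	// give up once the deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if st, err := os.Stat(path); err == nil && st.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("containerd socket is up")
	}
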
	I1217 11:41:43.377133 3167525 ssh_runner.go:195] Run: containerd --version
	I1217 11:41:43.401527 3167525 ssh_runner.go:195] Run: containerd --version
	I1217 11:41:43.428586 3167525 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1217 11:41:43.431702 3167525 cli_runner.go:164] Run: docker network inspect force-systemd-env-085980 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:41:43.447476 3167525 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 11:41:43.451307 3167525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:41:43.460622 3167525 kubeadm.go:884] updating cluster {Name:force-systemd-env-085980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-env-085980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:41:43.460740 3167525 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1217 11:41:43.460813 3167525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:41:43.484191 3167525 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:41:43.484214 3167525 containerd.go:534] Images already preloaded, skipping extraction
	I1217 11:41:43.484280 3167525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:41:43.510337 3167525 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:41:43.510412 3167525 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:41:43.510429 3167525 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 containerd true true} ...
	I1217 11:41:43.510528 3167525 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-085980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:force-systemd-env-085980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
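The unit drop-in above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. If kubelet misbehaves later (as it does for the parallel cluster interleaved in this log), the merged unit can be inspected from the host; a minimal sketch, assuming the profile name from this run:

    # Show kubelet.service together with every drop-in, as systemd merges them
    minikube ssh -p force-systemd-env-085980 -- sudo systemctl cat kubelet
    # Or read the generated drop-in directly
    minikube ssh -p force-systemd-env-085980 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf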
	I1217 11:41:43.510602 3167525 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:41:43.536712 3167525 cni.go:84] Creating CNI manager for ""
	I1217 11:41:43.536734 3167525 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:41:43.536752 3167525 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:41:43.536775 3167525 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-085980 NodeName:force-systemd-env-085980 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:41:43.536888 3167525 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-085980"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
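The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are uploaded as /var/tmp/minikube/kubeadm.yaml a few lines below. Such a file can be sanity-checked before kubeadm init ever runs; a sketch, assuming a kubeadm recent enough (v1.26+) to carry the validate subcommand:

    # Static validation of every kind in the generated config
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml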
	
	I1217 11:41:43.536955 3167525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:41:43.545182 3167525 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:41:43.545313 3167525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:41:43.553111 3167525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1217 11:41:43.566295 3167525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:41:43.580324 3167525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1217 11:41:43.593723 3167525 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:41:43.597771 3167525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:41:43.608184 3167525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:41:43.726262 3167525 ssh_runner.go:195] Run: sudo systemctl start kubelet
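At this point the unit files are installed, systemd has reloaded, and kubelet has been started (though not enabled, which is what the '[WARNING Service-kubelet]' later in this log complains about). kubeadm's wait-control-plane phase polls the kubelet healthz endpoint; the same probe can be run by hand, sketched with the port shown in the failure output further down:

    # kubeadm considers the kubelet healthy once this returns ok
    curl -sSf http://127.0.0.1:10248/healthz && echo kubelet healthy
    systemctl is-active kubelet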
	I1217 11:41:43.742992 3167525 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980 for IP: 192.168.85.2
	I1217 11:41:43.743017 3167525 certs.go:195] generating shared ca certs ...
	I1217 11:41:43.743034 3167525 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:41:43.743226 3167525 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:41:43.743301 3167525 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:41:43.743317 3167525 certs.go:257] generating profile certs ...
	I1217 11:41:43.743391 3167525 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/client.key
	I1217 11:41:43.743409 3167525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/client.crt with IP's: []
	I1217 11:41:44.119909 3167525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/client.crt ...
	I1217 11:41:44.119944 3167525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/client.crt: {Name:mk91cbf9b1993d7b000dc4539185b52086c474a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:41:44.120186 3167525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/client.key ...
	I1217 11:41:44.120205 3167525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/client.key: {Name:mk2ad33ee0234b88279147339f1c52539ba1cd93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:41:44.120325 3167525 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.key.7d835ec1
	I1217 11:41:44.120346 3167525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.crt.7d835ec1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 11:41:44.543463 3167525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.crt.7d835ec1 ...
	I1217 11:41:44.543499 3167525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.crt.7d835ec1: {Name:mk75d0d85e1e1da9112f0b735483bb202177803b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:41:44.543722 3167525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.key.7d835ec1 ...
	I1217 11:41:44.543740 3167525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.key.7d835ec1: {Name:mk99cd2bec1bbe4947a5d5ac27cb0a5bf5c1620d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:41:44.543854 3167525 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.crt.7d835ec1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.crt
	I1217 11:41:44.543940 3167525 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.key.7d835ec1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.key
	I1217 11:41:44.544001 3167525 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.key
	I1217 11:41:44.544023 3167525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.crt with IP's: []
	I1217 11:41:44.665397 3167525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.crt ...
	I1217 11:41:44.665433 3167525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.crt: {Name:mk7ccf90348b2753eb78a22020475fb74509348c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:41:44.665625 3167525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.key ...
	I1217 11:41:44.665643 3167525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.key: {Name:mk4bf96c4c28b19f424f7f15d2f75af352f3cf18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:41:44.665722 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 11:41:44.665740 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 11:41:44.665753 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 11:41:44.665775 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 11:41:44.665791 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 11:41:44.665814 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 11:41:44.665826 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 11:41:44.665837 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 11:41:44.665910 3167525 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:41:44.665954 3167525 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:41:44.665968 3167525 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:41:44.666007 3167525 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:41:44.666035 3167525 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:41:44.666063 3167525 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:41:44.666112 3167525 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:41:44.666150 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem -> /usr/share/ca-certificates/2924574.pem
	I1217 11:41:44.666168 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> /usr/share/ca-certificates/29245742.pem
	I1217 11:41:44.666183 3167525 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:41:44.666737 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:41:44.686331 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:41:44.704535 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:41:44.722860 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:41:44.741086 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 11:41:44.772430 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:41:44.794467 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:41:44.814649 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/force-systemd-env-085980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:41:44.836645 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:41:44.855040 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:41:44.872556 3167525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:41:44.891095 3167525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
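All shared and profile certificates now sit under /var/lib/minikube/certs on the node. A spot check that the API server certificate really carries the SANs generated above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2) can be done with openssl; the -ext flag assumes OpenSSL 1.1.1 or newer:

    sudo openssl x509 -noout -subject -enddate -ext subjectAltName \
        -in /var/lib/minikube/certs/apiserver.crt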
	I1217 11:41:44.904250 3167525 ssh_runner.go:195] Run: openssl version
	I1217 11:41:44.911642 3167525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:41:44.920946 3167525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:41:44.929021 3167525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:41:44.933115 3167525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:41:44.933209 3167525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:41:44.976747 3167525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:41:44.984559 3167525 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:41:44.992340 3167525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:41:45.016970 3167525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:41:45.056129 3167525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:41:45.069890 3167525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:41:45.070048 3167525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:41:45.142100 3167525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:41:45.155656 3167525 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
	I1217 11:41:45.170740 3167525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:41:45.184404 3167525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:41:45.200777 3167525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:41:45.206711 3167525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:41:45.206994 3167525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:41:45.302843 3167525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:41:45.314129 3167525 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
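The openssl/ln pairs above build the standard OpenSSL hashed CA directory: each trusted certificate in /etc/ssl/certs is made reachable through a <subject-hash>.0 symlink so verification can look it up by hash (b5213941 is the hash printed for minikubeCA.pem in this run). Generalized for one certificate:

    # Compute the subject hash and point <hash>.0 at the cert
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"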
	I1217 11:41:45.323938 3167525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:41:45.329012 3167525 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:41:45.329070 3167525 kubeadm.go:401] StartCluster: {Name:force-systemd-env-085980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-env-085980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:41:45.329156 3167525 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:41:45.329236 3167525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:41:45.363495 3167525 cri.go:89] found id: ""
	I1217 11:41:45.363578 3167525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:41:45.371906 3167525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:41:45.380368 3167525 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:41:45.380465 3167525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:41:45.389255 3167525 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:41:45.389280 3167525 kubeadm.go:158] found existing configuration files:
	
	I1217 11:41:45.389338 3167525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:41:45.398301 3167525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:41:45.398374 3167525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:41:45.406651 3167525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:41:45.416204 3167525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:41:45.416333 3167525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:41:45.426164 3167525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:41:45.434525 3167525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:41:45.434616 3167525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:41:45.442552 3167525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:41:45.450707 3167525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:41:45.450804 3167525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
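The four grep/rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted so that kubeadm regenerates it. The pattern condenses to a loop, purely as an illustration:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" \
          "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done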
	I1217 11:41:45.458646 3167525 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:41:45.497695 3167525 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 11:41:45.497920 3167525 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:41:45.545443 3167525 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:41:45.545521 3167525 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:41:45.545559 3167525 kubeadm.go:319] OS: Linux
	I1217 11:41:45.545609 3167525 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:41:45.545661 3167525 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:41:45.545713 3167525 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:41:45.545766 3167525 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:41:45.545818 3167525 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:41:45.545875 3167525 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:41:45.545925 3167525 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:41:45.545977 3167525 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:41:45.546028 3167525 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:41:45.621065 3167525 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:41:45.621180 3167525 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:41:45.621276 3167525 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:41:45.627039 3167525 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:41:45.995011 3121455 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000850203s
	I1217 11:41:45.995170 3121455 kubeadm.go:319] 
	I1217 11:41:45.995243 3121455 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:41:45.995283 3121455 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:41:45.995388 3121455 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:41:45.995397 3121455 kubeadm.go:319] 
	I1217 11:41:45.995500 3121455 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:41:45.995537 3121455 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:41:45.995572 3121455 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:41:45.995580 3121455 kubeadm.go:319] 
	I1217 11:41:46.000737 3121455 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:41:46.001172 3121455 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:41:46.001287 3121455 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:41:46.001556 3121455 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 11:41:46.001565 3121455 kubeadm.go:319] 
	I1217 11:41:46.001636 3121455 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 11:41:46.001706 3121455 kubeadm.go:403] duration metric: took 12m16.86095773s to StartCluster
	I1217 11:41:46.001751 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:41:46.001843 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:41:46.043296 3121455 cri.go:89] found id: ""
	I1217 11:41:46.043321 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.043331 3121455 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:41:46.043338 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:41:46.043398 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:41:46.084074 3121455 cri.go:89] found id: ""
	I1217 11:41:46.084099 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.084108 3121455 logs.go:284] No container was found matching "etcd"
	I1217 11:41:46.084114 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:41:46.084181 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:41:46.119857 3121455 cri.go:89] found id: ""
	I1217 11:41:46.119883 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.119892 3121455 logs.go:284] No container was found matching "coredns"
	I1217 11:41:46.119898 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:41:46.119960 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:41:46.153180 3121455 cri.go:89] found id: ""
	I1217 11:41:46.153204 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.153212 3121455 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:41:46.153218 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:41:46.153276 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:41:46.187021 3121455 cri.go:89] found id: ""
	I1217 11:41:46.187043 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.187052 3121455 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:41:46.187059 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:41:46.187118 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:41:46.217732 3121455 cri.go:89] found id: ""
	I1217 11:41:46.217753 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.217762 3121455 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:41:46.217769 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:41:46.217830 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:41:46.263035 3121455 cri.go:89] found id: ""
	I1217 11:41:46.263056 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.263065 3121455 logs.go:284] No container was found matching "kindnet"
	I1217 11:41:46.263071 3121455 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1217 11:41:46.263129 3121455 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 11:41:46.304678 3121455 cri.go:89] found id: ""
	I1217 11:41:46.304754 3121455 logs.go:282] 0 containers: []
	W1217 11:41:46.304800 3121455 logs.go:284] No container was found matching "storage-provisioner"
	I1217 11:41:46.304825 3121455 logs.go:123] Gathering logs for dmesg ...
	I1217 11:41:46.304853 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:41:46.327931 3121455 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:41:46.328010 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:41:46.434272 3121455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:41:46.434335 3121455 logs.go:123] Gathering logs for containerd ...
	I1217 11:41:46.434363 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:41:46.485380 3121455 logs.go:123] Gathering logs for container status ...
	I1217 11:41:46.485460 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:41:46.520267 3121455 logs.go:123] Gathering logs for kubelet ...
	I1217 11:41:46.520300 3121455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 11:41:46.584108 3121455 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000850203s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 11:41:46.584222 3121455 out.go:285] * 
	W1217 11:41:46.584491 3121455 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1217 11:41:46.584552 3121455 out.go:285] * 
	W1217 11:41:46.586816 3121455 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:41:46.592433 3121455 out.go:203] 
	W1217 11:41:46.596188 3121455 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1217 11:41:46.596583 3121455 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:41:46.596697 3121455 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
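Taken together, the warnings above describe a cgroup v1 host running a kubelet (v1.35.0-rc.1) that now refuses to start there by default. A sketch of the suggested retry, combining the hint from the Suggestion line with a check of which cgroup hierarchy the host actually mounts; treat the exact flags as illustrative:

    # cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/
    # Retry with the cgroup driver override minikube suggests for issue #4172;
    # per the preflight warning, kubelet v1.35+ on cgroup v1 additionally needs
    # failCgroupV1: false in its KubeletConfiguration.
    minikube start --driver=docker --container-runtime=containerd \
        --extra-config=kubelet.cgroup-driver=systemd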
	I1217 11:41:46.601285 3121455 out.go:203] 
	I1217 11:41:45.634071 3167525 out.go:252]   - Generating certificates and keys ...
	I1217 11:41:45.634221 3167525 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:41:45.634328 3167525 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:41:45.770301 3167525 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:41:47.212106 3167525 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.274445306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.275582674Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.519518636s"
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.275628113Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.277217753Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.931773388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.933782063Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.936138060Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.939609429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.940497965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 663.244594ms"
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.940555088Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 17 11:33:39 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:39.941504358Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
	Dec 17 11:33:41 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:41.485296980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:33:41 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:41.487110927Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21753021"
	Dec 17 11:33:41 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:41.489745990Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:33:41 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:41.494084677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:33:41 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:41.495518760Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.553892494s"
	Dec 17 11:33:41 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:33:41.495568499Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
	Dec 17 11:38:31 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:38:31.855800151Z" level=info msg="container event discarded" container=d7a7f0f559aa3927921a2dbf356c89b14d37390f9d30816024b3a947653bf770 type=CONTAINER_DELETED_EVENT
	Dec 17 11:38:31 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:38:31.870114183Z" level=info msg="container event discarded" container=984856d948ad5f038b8e29c40b8e246fa483bd7dfb2ab269e509ec208ad82f5c type=CONTAINER_DELETED_EVENT
	Dec 17 11:38:31 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:38:31.883405979Z" level=info msg="container event discarded" container=09acdaa5fe0ea0d3695468746051a4359bd1483c119953e52104300e6161af43 type=CONTAINER_DELETED_EVENT
	Dec 17 11:38:31 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:38:31.883456235Z" level=info msg="container event discarded" container=14b965fe60bbdbe0643474303045ba2136dc066964ba34076a1aa8832d037169 type=CONTAINER_DELETED_EVENT
	Dec 17 11:38:31 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:38:31.899828852Z" level=info msg="container event discarded" container=8fa1273b07a543f3a25cd3a9a57e3dc8a3cbd95d200f98f460dac6b2962ca249 type=CONTAINER_DELETED_EVENT
	Dec 17 11:38:31 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:38:31.899892358Z" level=info msg="container event discarded" container=325379dad4129572c889bc74ca1084dc25400e957fcadfd45db895a3edbad870 type=CONTAINER_DELETED_EVENT
	Dec 17 11:38:31 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:38:31.919086866Z" level=info msg="container event discarded" container=a9e29f10e429452ede75b89cb63cfdd052300dc48e1d7df9225ed6aab0e6fb7e type=CONTAINER_DELETED_EVENT
	Dec 17 11:38:31 kubernetes-upgrade-452067 containerd[555]: time="2025-12-17T11:38:31.919144407Z" level=info msg="container event discarded" container=9024b9ea78870ac7feef48b0fd465cc442d3c78a5ff16708e686511105d3042b type=CONTAINER_DELETED_EVENT
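
	The pulls above show containerd fetching the v1.35.0-rc.1 control-plane images (coredns, pause, etcd) by digest before the older container events are discarded. A quick way to confirm what actually landed in the node's CRI image store, assuming a shell on the node itself, is:

	    # list CRI-visible images and filter for the ones pulled above
	    sudo crictl images | grep -E 'coredns|pause|etcd'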
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
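
	A refused connection on localhost:8443 means the check never reached an API server at all; with the kubelet crash-looping (see the kubelet section below), the static pods, kube-apiserver included, never start. A minimal probe that distinguishes "nothing listening" from "listening but unhealthy", assuming access to the node, is:

	    # exit code 7 (connection refused) => nothing is bound to the port at all
	    curl -k --max-time 2 https://localhost:8443/healthz; echo "exit=$?"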
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
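
	The repeated overlayfs lines are kernel warnings rather than errors: each container layer mounted on a kernel without idmapped-layer support logs one. Upstream overlayfs gained idmapped-mount support around Linux 5.19 (stated here from general kernel knowledge, not from this log), so on this 5.15 host the message is expected and benign:

	    # confirm the running kernel on the host
	    uname -r   # 5.15.0-1084-aws here, hence the warnings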
	
	
	==> kernel <==
	 11:41:49 up 17:24,  0 user,  load average: 2.51, 2.05, 2.01
	Linux kubernetes-upgrade-452067 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:41:45 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:41:46 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 11:41:46 kubernetes-upgrade-452067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:41:46 kubernetes-upgrade-452067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:41:46 kubernetes-upgrade-452067 kubelet[14387]: E1217 11:41:46.385017   14387 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:41:46 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:41:46 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:41:46 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 11:41:46 kubernetes-upgrade-452067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:41:47 kubernetes-upgrade-452067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:41:47 kubernetes-upgrade-452067 kubelet[14426]: E1217 11:41:47.142814   14426 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:41:47 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:41:47 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:41:47 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 11:41:47 kubernetes-upgrade-452067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:41:48 kubernetes-upgrade-452067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:41:48 kubernetes-upgrade-452067 kubelet[14431]: E1217 11:41:48.149637   14431 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:41:48 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:41:48 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:41:48 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 11:41:48 kubernetes-upgrade-452067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:41:49 kubernetes-upgrade-452067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:41:49 kubernetes-upgrade-452067 kubelet[14460]: E1217 11:41:49.082985   14460 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:41:49 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:41:49 kubernetes-upgrade-452067 systemd[1]: kubelet.service: Failed with result 'exit-code'.
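
	This is the failure driving the restart loop: the kubelet validates its configuration at startup and, when configured not to tolerate cgroup v1, exits before doing any work, so systemd restarts it indefinitely (counter at 320+). Which hierarchy the host exposes is a one-line check; the filesystem type of /sys/fs/cgroup distinguishes the two, assuming a shell on the node:

	    stat -fc %T /sys/fs/cgroup
	    # cgroup2fs => cgroup v2 (unified); tmpfs => legacy cgroup v1 hierarchy

	The "configured to not run" wording points at the kubelet's cgroup-v1 opt-out added during the cgroup v1 deprecation (the failCgroupV1 KubeletConfiguration field); that field name is recalled from upstream Kubernetes, not from this log.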
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-452067 -n kubernetes-upgrade-452067
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-452067 -n kubernetes-upgrade-452067: exit status 2 (537.802344ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-452067" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-452067" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-452067
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-452067: (2.402301294s)
--- FAIL: TestKubernetesUpgrade (798.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (514.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m32.482041388s)

                                                
                                                
-- stdout --
	* [no-preload-118262] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-118262" primary control-plane node in "no-preload-118262" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:45:22.700339 3184285 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:45:22.700497 3184285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:45:22.700511 3184285 out.go:374] Setting ErrFile to fd 2...
	I1217 11:45:22.700518 3184285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:45:22.700778 3184285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:45:22.701211 3184285 out.go:368] Setting JSON to false
	I1217 11:45:22.702147 3184285 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":62873,"bootTime":1765909050,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:45:22.702221 3184285 start.go:143] virtualization:  
	I1217 11:45:22.706608 3184285 out.go:179] * [no-preload-118262] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:45:22.710204 3184285 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:45:22.710277 3184285 notify.go:221] Checking for updates...
	I1217 11:45:22.716861 3184285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:45:22.720069 3184285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:45:22.723291 3184285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:45:22.726386 3184285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:45:22.729551 3184285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:45:22.733222 3184285 config.go:182] Loaded profile config "cert-expiration-182607": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 11:45:22.733336 3184285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:45:22.755784 3184285 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:45:22.755911 3184285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:45:22.819336 3184285 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:45:22.809737105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:45:22.819440 3184285 docker.go:319] overlay module found
	I1217 11:45:22.822905 3184285 out.go:179] * Using the docker driver based on user configuration
	I1217 11:45:22.825971 3184285 start.go:309] selected driver: docker
	I1217 11:45:22.825990 3184285 start.go:927] validating driver "docker" against <nil>
	I1217 11:45:22.826004 3184285 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:45:22.826691 3184285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:45:22.889231 3184285 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:45:22.873474477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:45:22.889412 3184285 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:45:22.889642 3184285 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:45:22.892752 3184285 out.go:179] * Using Docker driver with root privileges
	I1217 11:45:22.896616 3184285 cni.go:84] Creating CNI manager for ""
	I1217 11:45:22.896686 3184285 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:45:22.896701 3184285 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
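
	minikube derives the CNI from the driver/runtime pair; with docker plus containerd it defaults to kindnet and sets NetworkPlugin=cni, as the lines above show. A local rerun could pin the choice explicitly; this sketch mirrors the flags used in this run and is illustrative only:

	    out/minikube-linux-arm64 start -p no-preload-118262 \
	      --driver=docker --container-runtime=containerd --cni=kindnet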
	I1217 11:45:22.896803 3184285 start.go:353] cluster config:
	{Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:45:22.899885 3184285 out.go:179] * Starting "no-preload-118262" primary control-plane node in "no-preload-118262" cluster
	I1217 11:45:22.902669 3184285 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:45:22.905523 3184285 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:45:22.908523 3184285 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:45:22.908714 3184285 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/config.json ...
	I1217 11:45:22.908764 3184285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/config.json: {Name:mkfe3920055c4bdb54d0017ffc9b4511c52cc50d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:45:22.908576 3184285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:45:22.909221 3184285 cache.go:107] acquiring lock: {Name:mk815fc0c67b76ed2ee0b075f6917d43e67b13d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:45:22.909288 3184285 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 11:45:22.909301 3184285 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.066µs
	I1217 11:45:22.909313 3184285 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 11:45:22.909328 3184285 cache.go:107] acquiring lock: {Name:mk11644c35fa0d35fcf9d5a865af6c28a7df16d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:45:22.909403 3184285 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:45:22.909580 3184285 cache.go:107] acquiring lock: {Name:mk02712d952db0244ab56f62810e58a983831503 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:45:22.909676 3184285 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:45:22.909772 3184285 cache.go:107] acquiring lock: {Name:mk436387f099b91bd6762b69e3678ebc0f9561cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:45:22.909839 3184285 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:45:22.909926 3184285 cache.go:107] acquiring lock: {Name:mkf4cd732ad0857bbeaf7d91402ed78da15112e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:45:22.909998 3184285 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:45:22.910097 3184285 cache.go:107] acquiring lock: {Name:mka934c06f25efbc149ef4769eaae5adad4ea53a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:45:22.910149 3184285 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1217 11:45:22.910161 3184285 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 67.313µs
	I1217 11:45:22.910178 3184285 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 11:45:22.910189 3184285 cache.go:107] acquiring lock: {Name:mkb53641077bc34de612e9b78566264ac82d9b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:45:22.910252 3184285 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:45:22.910344 3184285 cache.go:107] acquiring lock: {Name:mkca0a51840ba852f371cde8bcc41ec807c30a00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:45:22.910418 3184285 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:45:22.913069 3184285 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:45:22.913460 3184285 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:45:22.913611 3184285 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:45:22.913739 3184285 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:45:22.913871 3184285 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:45:22.914077 3184285 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:45:22.933754 3184285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:45:22.933779 3184285 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:45:22.933800 3184285 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:45:22.933844 3184285 start.go:360] acquireMachinesLock for no-preload-118262: {Name:mka8b15d744256405cc79d3bb936a81c229c3b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:45:22.933963 3184285 start.go:364] duration metric: took 101.249µs to acquireMachinesLock for "no-preload-118262"
	I1217 11:45:22.933995 3184285 start.go:93] Provisioning new machine with config: &{Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:45:22.934066 3184285 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:45:22.937689 3184285 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:45:22.937968 3184285 start.go:159] libmachine.API.Create for "no-preload-118262" (driver="docker")
	I1217 11:45:22.938004 3184285 client.go:173] LocalClient.Create starting
	I1217 11:45:22.938081 3184285 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 11:45:22.938117 3184285 main.go:143] libmachine: Decoding PEM data...
	I1217 11:45:22.938132 3184285 main.go:143] libmachine: Parsing certificate...
	I1217 11:45:22.938179 3184285 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 11:45:22.938196 3184285 main.go:143] libmachine: Decoding PEM data...
	I1217 11:45:22.938207 3184285 main.go:143] libmachine: Parsing certificate...
	I1217 11:45:22.938602 3184285 cli_runner.go:164] Run: docker network inspect no-preload-118262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:45:22.960626 3184285 cli_runner.go:211] docker network inspect no-preload-118262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:45:22.960707 3184285 network_create.go:284] running [docker network inspect no-preload-118262] to gather additional debugging logs...
	I1217 11:45:22.960729 3184285 cli_runner.go:164] Run: docker network inspect no-preload-118262
	W1217 11:45:22.977763 3184285 cli_runner.go:211] docker network inspect no-preload-118262 returned with exit code 1
	I1217 11:45:22.977794 3184285 network_create.go:287] error running [docker network inspect no-preload-118262]: docker network inspect no-preload-118262: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-118262 not found
	I1217 11:45:22.977808 3184285 network_create.go:289] output of [docker network inspect no-preload-118262]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-118262 not found
	
	** /stderr **
	I1217 11:45:22.977919 3184285 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:45:22.994349 3184285 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
	I1217 11:45:22.994670 3184285 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e0545776686c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:70:9e:49:ed:7d} reservation:<nil>}
	I1217 11:45:22.996129 3184285 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-279becfad84b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:b7:62:6e:a9:ee} reservation:<nil>}
	I1217 11:45:22.996484 3184285 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-05bd2a2b92b3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:94:ae:4b:d8:78} reservation:<nil>}
	I1217 11:45:22.997032 3184285 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c0b1c0}
	I1217 11:45:22.997089 3184285 network_create.go:124] attempt to create docker network no-preload-118262 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 11:45:22.997166 3184285 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-118262 no-preload-118262
	I1217 11:45:23.076289 3184285 network_create.go:108] docker network no-preload-118262 192.168.85.0/24 created
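
	The network create call above is the literal docker CLI equivalent of what network_create.go runs after skipping the four subnets already taken. Once it succeeds, the chosen subnet can be read back to verify the static-IP calculation that follows:

	    docker network inspect no-preload-118262 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	    # expected: 192.168.85.0/24 gw 192.168.85.1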
	I1217 11:45:23.076319 3184285 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-118262" container
	I1217 11:45:23.076449 3184285 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:45:23.093274 3184285 cli_runner.go:164] Run: docker volume create no-preload-118262 --label name.minikube.sigs.k8s.io=no-preload-118262 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:45:23.111876 3184285 oci.go:103] Successfully created a docker volume no-preload-118262
	I1217 11:45:23.111961 3184285 cli_runner.go:164] Run: docker run --rm --name no-preload-118262-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-118262 --entrypoint /usr/bin/test -v no-preload-118262:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:45:23.226934 3184285 cache.go:162] opening:  /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 11:45:23.258650 3184285 cache.go:162] opening:  /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 11:45:23.275894 3184285 cache.go:162] opening:  /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 11:45:23.282798 3184285 cache.go:162] opening:  /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1217 11:45:23.291704 3184285 cache.go:162] opening:  /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 11:45:23.357653 3184285 cache.go:162] opening:  /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 11:45:23.703191 3184285 cache.go:157] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 11:45:23.703223 3184285 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 793.297246ms
	I1217 11:45:23.703240 3184285 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 11:45:23.810827 3184285 oci.go:107] Successfully prepared a docker volume no-preload-118262
	I1217 11:45:23.810878 3184285 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1217 11:45:23.811000 3184285 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 11:45:23.811109 3184285 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:45:23.875343 3184285 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-118262 --name no-preload-118262 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-118262 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-118262 --network no-preload-118262 --ip 192.168.85.2 --volume no-preload-118262:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:45:24.292497 3184285 cache.go:157] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 11:45:24.292581 3184285 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.383000861s
	I1217 11:45:24.292621 3184285 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 11:45:24.294779 3184285 cache.go:157] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 11:45:24.294851 3184285 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.38450732s
	I1217 11:45:24.294879 3184285 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 11:45:24.317402 3184285 cache.go:157] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 11:45:24.317428 3184285 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.40809991s
	I1217 11:45:24.317441 3184285 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 11:45:24.339070 3184285 cache.go:157] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 11:45:24.339151 3184285 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.429376633s
	I1217 11:45:24.339198 3184285 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 11:45:24.370157 3184285 cache.go:157] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 11:45:24.370188 3184285 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 1.459998685s
	I1217 11:45:24.370200 3184285 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 11:45:24.370231 3184285 cache.go:87] Successfully saved all images to host disk.
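
	All of the required images now exist as tarballs under the arch-specific cache, which is how a --preload=false start sideloads them instead of relying on a preload bundle. The layout matches the paths in the cache lines above:

	    ls /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/
	    # etcd_3.6.6-0  kube-apiserver_v1.35.0-rc.1  kube-proxy_v1.35.0-rc.1  ...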
	I1217 11:45:24.378615 3184285 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Running}}
	I1217 11:45:24.407091 3184285 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:45:24.433267 3184285 cli_runner.go:164] Run: docker exec no-preload-118262 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:45:24.491186 3184285 oci.go:144] the created container "no-preload-118262" has a running status.
	I1217 11:45:24.491244 3184285 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519...
	I1217 11:45:24.495337 3184285 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 11:45:24.526715 3184285 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:45:24.551536 3184285 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:45:24.551558 3184285 kic_runner.go:114] Args: [docker exec --privileged no-preload-118262 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:45:24.603119 3184285 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:45:24.628355 3184285 machine.go:94] provisionDockerMachine start ...
	I1217 11:45:24.628471 3184285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:45:24.650039 3184285 main.go:143] libmachine: Using SSH client type: native
	I1217 11:45:24.650165 3184285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36018 <nil> <nil>}
	I1217 11:45:24.650173 3184285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:45:24.650923 3184285 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 11:45:27.788243 3184285 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-118262
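
	The initial "handshake failed: EOF" above is normal: libmachine starts dialing as soon as the container exists, before sshd inside it is ready, and retries until the hostname command succeeds. The host side of the 127.0.0.1:36018 mapping can be recovered from docker directly:

	    # show which host port docker mapped to the container's sshd
	    docker port no-preload-118262 22
	    # e.g. 127.0.0.1:36018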
	
	I1217 11:45:27.788267 3184285 ubuntu.go:182] provisioning hostname "no-preload-118262"
	I1217 11:45:27.788368 3184285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:45:27.808048 3184285 main.go:143] libmachine: Using SSH client type: native
	I1217 11:45:27.808165 3184285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36018 <nil> <nil>}
	I1217 11:45:27.808180 3184285 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-118262 && echo "no-preload-118262" | sudo tee /etc/hostname
	I1217 11:45:27.952253 3184285 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-118262
	
	I1217 11:45:27.952337 3184285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:45:27.976578 3184285 main.go:143] libmachine: Using SSH client type: native
	I1217 11:45:27.976682 3184285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36018 <nil> <nil>}
	I1217 11:45:27.976701 3184285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118262' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118262/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118262' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:45:28.122226 3184285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:45:28.122250 3184285 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:45:28.122270 3184285 ubuntu.go:190] setting up certificates
	I1217 11:45:28.122281 3184285 provision.go:84] configureAuth start
	I1217 11:45:28.122342 3184285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:45:28.144299 3184285 provision.go:143] copyHostCerts
	I1217 11:45:28.144373 3184285 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:45:28.144382 3184285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:45:28.144497 3184285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:45:28.144612 3184285 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:45:28.144625 3184285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:45:28.144657 3184285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:45:28.144738 3184285 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:45:28.144747 3184285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:45:28.144772 3184285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:45:28.144836 3184285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.no-preload-118262 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-118262]
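
	The server certificate is minted with SANs for every name the API server may be reached by (127.0.0.1, the static node IP, localhost, minikube, and the profile name). If a TLS verification failure ever points here, the SAN list on the generated cert can be inspected directly (requires OpenSSL 1.1.1 or newer for -ext):

	    openssl x509 -in /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem \
	      -noout -ext subjectAltName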
	I1217 11:45:28.289515 3184285 provision.go:177] copyRemoteCerts
	I1217 11:45:28.289638 3184285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:45:28.289709 3184285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:45:28.307229 3184285 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36018 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:45:28.413053 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:45:28.434636 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:45:28.454748 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:45:28.473260 3184285 provision.go:87] duration metric: took 350.952474ms to configureAuth
	I1217 11:45:28.473329 3184285 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:45:28.473537 3184285 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:45:28.473553 3184285 machine.go:97] duration metric: took 3.845174958s to provisionDockerMachine
	I1217 11:45:28.473560 3184285 client.go:176] duration metric: took 5.535550452s to LocalClient.Create
	I1217 11:45:28.473595 3184285 start.go:167] duration metric: took 5.535621802s to libmachine.API.Create "no-preload-118262"
	I1217 11:45:28.473608 3184285 start.go:293] postStartSetup for "no-preload-118262" (driver="docker")
	I1217 11:45:28.473618 3184285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:45:28.473679 3184285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:45:28.473730 3184285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:45:28.490895 3184285 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36018 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:45:28.593040 3184285 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:45:28.596747 3184285 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:45:28.596833 3184285 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:45:28.596853 3184285 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:45:28.596933 3184285 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:45:28.597018 3184285 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:45:28.597131 3184285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:45:28.604857 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:45:28.623678 3184285 start.go:296] duration metric: took 150.05628ms for postStartSetup
	I1217 11:45:28.624117 3184285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:45:28.648151 3184285 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/config.json ...
	I1217 11:45:28.648522 3184285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:45:28.648574 3184285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:45:28.671326 3184285 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36018 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:45:28.766227 3184285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:45:28.771519 3184285 start.go:128] duration metric: took 5.837437297s to createHost
	I1217 11:45:28.771544 3184285 start.go:83] releasing machines lock for "no-preload-118262", held for 5.837568059s
	I1217 11:45:28.771618 3184285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:45:28.788742 3184285 ssh_runner.go:195] Run: cat /version.json
	I1217 11:45:28.788800 3184285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:45:28.788848 3184285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:45:28.788920 3184285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:45:28.824686 3184285 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36018 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:45:28.832725 3184285 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36018 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:45:29.059847 3184285 ssh_runner.go:195] Run: systemctl --version
	I1217 11:45:29.066467 3184285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:45:29.070596 3184285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:45:29.070730 3184285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:45:29.101697 3184285 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
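ssh_runner echoes the find invocation just above with its shell quoting stripped, so the line is not copy-pasteable as printed. One quoting that reproduces it (a reconstruction, not the literal source string; it relies on GNU find substituting {} even inside an argument):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;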
	I1217 11:45:29.101722 3184285 start.go:496] detecting cgroup driver to use...
	I1217 11:45:29.101776 3184285 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:45:29.101854 3184285 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:45:29.117522 3184285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:45:29.131041 3184285 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:45:29.131154 3184285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:45:29.149333 3184285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:45:29.169104 3184285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:45:29.303102 3184285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:45:29.439124 3184285 docker.go:234] disabling docker service ...
	I1217 11:45:29.439190 3184285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:45:29.462506 3184285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:45:29.478406 3184285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:45:29.615528 3184285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:45:29.733780 3184285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:45:29.748067 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:45:29.764396 3184285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:45:29.775319 3184285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:45:29.787040 3184285 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:45:29.787168 3184285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:45:29.797733 3184285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:45:29.809772 3184285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:45:29.822292 3184285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:45:29.832921 3184285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:45:29.842297 3184285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:45:29.851723 3184285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:45:29.860696 3184285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:45:29.870930 3184285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:45:29.879978 3184285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
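The sed batch above rewrites /etc/containerd/config.toml in place; before the daemon-reload and restart that follow, the touched keys can be spot-checked. A hedged sketch (the expected values are the ones the commands above set):

    grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # expected after the edits:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false          # the "cgroupfs" driver chosen above
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true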
	I1217 11:45:29.887661 3184285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:45:30.049732 3184285 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 11:45:30.169309 3184285 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:45:30.169449 3184285 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:45:30.173665 3184285 start.go:564] Will wait 60s for crictl version
	I1217 11:45:30.173735 3184285 ssh_runner.go:195] Run: which crictl
	I1217 11:45:30.177555 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:45:30.204223 3184285 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
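These version fields come from crictl talking to containerd through the endpoint configured in /etc/crictl.yaml a few lines earlier, which is what lets the bare `sudo crictl ...` invocations throughout the rest of this log find the socket. The equivalent one-off form passes the endpoint explicitly:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version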
	I1217 11:45:30.204289 3184285 ssh_runner.go:195] Run: containerd --version
	I1217 11:45:30.239850 3184285 ssh_runner.go:195] Run: containerd --version
	I1217 11:45:30.267465 3184285 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:45:30.270530 3184285 cli_runner.go:164] Run: docker network inspect no-preload-118262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:45:30.287095 3184285 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 11:45:30.291265 3184285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
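The /etc/hosts rewrite above uses a deliberate idiom: a plain `sudo grep ... > /etc/hosts` would fail because the redirect runs as the unprivileged user, so the filtered content is staged in /tmp and installed with sudo cp. Spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale mapping
      echo $'192.168.85.1\thost.minikube.internal'      # append the fresh one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts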
	I1217 11:45:30.301852 3184285 kubeadm.go:884] updating cluster {Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:45:30.301965 3184285 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:45:30.302017 3184285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:45:30.332538 3184285 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1217 11:45:30.332566 3184285 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 11:45:30.332625 3184285 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:45:30.332827 3184285 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:45:30.332955 3184285 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:45:30.333049 3184285 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:45:30.333144 3184285 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:45:30.333243 3184285 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 11:45:30.333328 3184285 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:45:30.333431 3184285 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:45:30.338675 3184285 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:45:30.339048 3184285 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:45:30.339258 3184285 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:45:30.339832 3184285 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:45:30.340061 3184285 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:45:30.340266 3184285 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:45:30.340623 3184285 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:45:30.341545 3184285 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 11:45:30.542573 3184285 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1217 11:45:30.542655 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:45:30.576165 3184285 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1217 11:45:30.576229 3184285 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1217 11:45:30.576332 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:45:30.576376 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:45:30.579532 3184285 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1217 11:45:30.579823 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:45:30.593678 3184285 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1217 11:45:30.593814 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1217 11:45:30.595819 3184285 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1217 11:45:30.595936 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:45:30.599852 3184285 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1217 11:45:30.599964 3184285 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:45:30.600056 3184285 ssh_runner.go:195] Run: which crictl
	I1217 11:45:30.617776 3184285 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1217 11:45:30.617882 3184285 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:45:30.617963 3184285 ssh_runner.go:195] Run: which crictl
	I1217 11:45:30.618464 3184285 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1217 11:45:30.618551 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1217 11:45:30.627845 3184285 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1217 11:45:30.627891 3184285 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:45:30.627940 3184285 ssh_runner.go:195] Run: which crictl
	I1217 11:45:30.651743 3184285 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1217 11:45:30.651787 3184285 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:45:30.651840 3184285 ssh_runner.go:195] Run: which crictl
	I1217 11:45:30.661443 3184285 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1217 11:45:30.661486 3184285 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 11:45:30.661535 3184285 ssh_runner.go:195] Run: which crictl
	I1217 11:45:30.673824 3184285 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1217 11:45:30.673936 3184285 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:45:30.674023 3184285 ssh_runner.go:195] Run: which crictl
	I1217 11:45:30.674135 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:45:30.674249 3184285 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1217 11:45:30.674289 3184285 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1217 11:45:30.674346 3184285 ssh_runner.go:195] Run: which crictl
	I1217 11:45:30.674455 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:45:30.674565 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:45:30.674662 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:45:30.674757 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 11:45:30.772821 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 11:45:30.772935 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:45:30.773005 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:45:30.773062 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 11:45:30.773126 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:45:30.773183 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:45:30.773245 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:45:30.864891 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 11:45:30.915110 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 11:45:30.915210 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:45:30.915271 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 11:45:30.915338 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 11:45:30.915408 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 11:45:30.915494 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 11:45:30.922865 3184285 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1217 11:45:30.923049 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 11:45:31.025354 3184285 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 11:45:31.025417 3184285 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 11:45:31.025467 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 11:45:31.025536 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1217 11:45:31.025575 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 11:45:31.025613 3184285 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 11:45:31.025674 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 11:45:31.025716 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 11:45:31.025735 3184285 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 11:45:31.025801 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 11:45:31.025834 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 11:45:31.025852 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1217 11:45:31.072749 3184285 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 11:45:31.072828 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
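Every image below goes through the same three steps: a stat probe on the node, a copy of the cached tarball when the probe exits non-zero, then a ctr import. A sketch of one round trip using the pause-image paths from this log (the copy itself is done by minikube over its SSH session, left as a placeholder here):

    IMG=/var/lib/minikube/images/pause_3.10.1
    if ! stat -c "%s %y" "$IMG" 2>/dev/null; then
      :   # placeholder: minikube scps ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 here
    fi
    sudo ctr -n=k8s.io images import "$IMG"   # load the tarball into containerd's k8s.io namespace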
	I1217 11:45:31.087424 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1217 11:45:31.087474 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1217 11:45:31.087540 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1217 11:45:31.087557 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1217 11:45:31.087619 3184285 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 11:45:31.087701 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 11:45:31.087754 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1217 11:45:31.087773 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1217 11:45:31.087815 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1217 11:45:31.087831 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1217 11:45:31.087898 3184285 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1217 11:45:31.087958 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1217 11:45:31.292920 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1217 11:45:31.292967 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1217 11:45:31.293014 3184285 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1217 11:45:31.293361 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1217 11:45:31.293387 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1217 11:45:31.606914 3184285 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 11:45:31.606999 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	W1217 11:45:31.651204 3184285 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1217 11:45:31.651333 3184285 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1217 11:45:31.651392 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:45:33.505124 3184285 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.853691145s)
	I1217 11:45:33.505276 3184285 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1217 11:45:33.505322 3184285 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:45:33.505387 3184285 ssh_runner.go:195] Run: which crictl
	I1217 11:45:33.506705 3184285 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.899678192s)
	I1217 11:45:33.506734 3184285 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1217 11:45:33.506752 3184285 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 11:45:33.506799 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 11:45:33.512201 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:45:34.953903 3184285 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.447076497s)
	I1217 11:45:34.953932 3184285 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1217 11:45:34.953950 3184285 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 11:45:34.954001 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 11:45:34.954071 3184285 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.441845294s)
	I1217 11:45:34.954107 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:45:36.547765 3184285 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.593637468s)
	I1217 11:45:36.547845 3184285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:45:36.547912 3184285 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.593895653s)
	I1217 11:45:36.547924 3184285 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1217 11:45:36.547940 3184285 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1217 11:45:36.547968 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1217 11:45:38.063728 3184285 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.51573262s)
	I1217 11:45:38.063755 3184285 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1217 11:45:38.063773 3184285 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1217 11:45:38.063832 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1217 11:45:38.063899 3184285 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.516039706s)
	I1217 11:45:38.063930 3184285 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 11:45:38.064020 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 11:45:39.921861 3184285 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.857817562s)
	I1217 11:45:39.921902 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1217 11:45:39.921928 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1217 11:45:39.922051 3184285 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.858196703s)
	I1217 11:45:39.922064 3184285 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1217 11:45:39.922081 3184285 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 11:45:39.922353 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 11:45:41.557947 3184285 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.635566731s)
	I1217 11:45:41.557973 3184285 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1217 11:45:41.557995 3184285 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1217 11:45:41.558058 3184285 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1217 11:45:42.589391 3184285 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.031308204s)
	I1217 11:45:42.589420 3184285 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1217 11:45:42.589441 3184285 cache_images.go:125] Successfully loaded all cached images
	I1217 11:45:42.589447 3184285 cache_images.go:94] duration metric: took 12.256866745s to LoadCachedImages
	I1217 11:45:42.589458 3184285 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:45:42.589555 3184285 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118262 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:45:42.589623 3184285 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:45:42.626401 3184285 cni.go:84] Creating CNI manager for ""
	I1217 11:45:42.626423 3184285 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:45:42.626440 3184285 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:45:42.626463 3184285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118262 NodeName:no-preload-118262 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:45:42.626570 3184285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-118262"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
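A generated config like the one above can be checked offline before kubeadm init runs at the end of this section; a hedged sketch, assuming a kubeadm new enough (v1.26+) to ship `kubeadm config validate`:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml       # parse/validate only
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # full init path, no side effects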
	
	I1217 11:45:42.626642 3184285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:45:42.636996 3184285 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1217 11:45:42.637113 3184285 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:45:42.646968 3184285 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1217 11:45:42.647064 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1217 11:45:42.647825 3184285 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet
	I1217 11:45:42.648554 3184285 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm
	I1217 11:45:42.653511 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1217 11:45:42.653544 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1217 11:45:43.581094 3184285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:45:43.620592 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1217 11:45:43.627594 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1217 11:45:43.627631 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
	I1217 11:45:43.742468 3184285 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1217 11:45:43.752463 3184285 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1217 11:45:43.752545 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
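The three binaries are fetched against the .sha256 checksum URLs shown in the download lines above. Reproduced by hand, the standard pattern is (sha256sum expects two spaces between the hash and the file name):

    V=v1.35.0-rc.1 ARCH=arm64
    for BIN in kubectl kubelet kubeadm; do
      curl -fsSLO "https://dl.k8s.io/release/$V/bin/linux/$ARCH/$BIN"
      curl -fsSLO "https://dl.k8s.io/release/$V/bin/linux/$ARCH/$BIN.sha256"
      echo "$(cat "$BIN.sha256")  $BIN" | sha256sum --check
    done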
	I1217 11:45:44.467547 3184285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:45:44.479447 3184285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 11:45:44.494114 3184285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:45:44.516253 3184285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 11:45:44.536038 3184285 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:45:44.540818 3184285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:45:44.552377 3184285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:45:44.710589 3184285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:45:44.734064 3184285 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262 for IP: 192.168.85.2
	I1217 11:45:44.734086 3184285 certs.go:195] generating shared ca certs ...
	I1217 11:45:44.734102 3184285 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:45:44.734243 3184285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:45:44.734292 3184285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:45:44.734305 3184285 certs.go:257] generating profile certs ...
	I1217 11:45:44.734359 3184285 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/client.key
	I1217 11:45:44.734375 3184285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/client.crt with IP's: []
	I1217 11:45:45.178770 3184285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/client.crt ...
	I1217 11:45:45.178811 3184285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/client.crt: {Name:mkc597c0df565083abe21b1e0069e65db1b37040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:45:45.179434 3184285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/client.key ...
	I1217 11:45:45.181115 3184285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/client.key: {Name:mk60fc2b54e58e61dba73b5bae37c8577342868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:45:45.181598 3184285 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key.082f94c0
	I1217 11:45:45.181878 3184285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.crt.082f94c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 11:45:45.401506 3184285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.crt.082f94c0 ...
	I1217 11:45:45.401536 3184285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.crt.082f94c0: {Name:mk2e922c0895e2ccfa3d71ef4be20a9400f0de8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:45:45.401773 3184285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key.082f94c0 ...
	I1217 11:45:45.401791 3184285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key.082f94c0: {Name:mk3262084a730ac1cc8fcbcf49fb711ed0e76d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:45:45.401912 3184285 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.crt.082f94c0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.crt
	I1217 11:45:45.402031 3184285 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key.082f94c0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key
	I1217 11:45:45.402101 3184285 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key
	I1217 11:45:45.402120 3184285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.crt with IP's: []
	I1217 11:45:45.774973 3184285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.crt ...
	I1217 11:45:45.775059 3184285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.crt: {Name:mkaa9096746d72c9f44fdb1f90e7769a8271ad42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:45:45.775319 3184285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key ...
	I1217 11:45:45.775357 3184285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key: {Name:mkaedbf4b34342447624299ecd5aa1d46753a377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:45:45.775639 3184285 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:45:45.775730 3184285 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:45:45.775756 3184285 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:45:45.775820 3184285 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:45:45.778409 3184285 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:45:45.778527 3184285 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:45:45.778654 3184285 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:45:45.779504 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:45:45.824623 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:45:45.846089 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:45:45.870876 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:45:45.892070 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:45:45.934141 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:45:45.964727 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:45:45.990071 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:45:46.028967 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:45:46.075538 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:45:46.098521 3184285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:45:46.122253 3184285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:45:46.136865 3184285 ssh_runner.go:195] Run: openssl version
	I1217 11:45:46.144015 3184285 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:45:46.153152 3184285 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:45:46.161904 3184285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:45:46.166780 3184285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:45:46.166844 3184285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:45:46.208678 3184285 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:45:46.217001 3184285 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
	I1217 11:45:46.232353 3184285 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:45:46.241192 3184285 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:45:46.260944 3184285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:45:46.266047 3184285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:45:46.266148 3184285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:45:46.336608 3184285 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:45:46.344684 3184285 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:45:46.352601 3184285 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:45:46.360813 3184285 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:45:46.368706 3184285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:45:46.373128 3184285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:45:46.373223 3184285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:45:46.414771 3184285 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:45:46.422660 3184285 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
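The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-CA-directory convention: verification looks up issuers via <subject-hash>.0 symlinks, which is where names like b5213941.0 come from. The pattern in general form:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"   # b5213941.0 in this log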
	I1217 11:45:46.432936 3184285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:45:46.437267 3184285 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:45:46.437340 3184285 kubeadm.go:401] StartCluster: {Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:45:46.437427 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:45:46.437487 3184285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:45:46.468183 3184285 cri.go:89] found id: ""
	I1217 11:45:46.468253 3184285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:45:46.478434 3184285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:45:46.487065 3184285 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:45:46.487133 3184285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:45:46.497929 3184285 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:45:46.497997 3184285 kubeadm.go:158] found existing configuration files:
	
	I1217 11:45:46.498084 3184285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:45:46.507284 3184285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:45:46.507395 3184285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:45:46.515577 3184285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:45:46.524299 3184285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:45:46.524450 3184285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:45:46.532351 3184285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:45:46.540996 3184285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:45:46.541109 3184285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:45:46.548832 3184285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:45:46.557490 3184285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:45:46.557609 3184285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:45:46.565802 3184285 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:45:46.621265 3184285 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:45:46.621677 3184285 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:45:46.710105 3184285 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:45:46.710255 3184285 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:45:46.710342 3184285 kubeadm.go:319] OS: Linux
	I1217 11:45:46.710416 3184285 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:45:46.710498 3184285 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:45:46.710569 3184285 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:45:46.710650 3184285 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:45:46.710723 3184285 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:45:46.710809 3184285 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:45:46.710883 3184285 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:45:46.710965 3184285 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:45:46.711037 3184285 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:45:46.787651 3184285 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:45:46.787777 3184285 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:45:46.787913 3184285 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:45:46.795095 3184285 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:45:46.800848 3184285 out.go:252]   - Generating certificates and keys ...
	I1217 11:45:46.801011 3184285 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:45:46.801089 3184285 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:45:47.236792 3184285 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:45:47.502140 3184285 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:45:48.088160 3184285 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:45:48.302337 3184285 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:45:48.500356 3184285 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:45:48.500951 3184285 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-118262] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:45:48.762007 3184285 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:45:48.762256 3184285 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-118262] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 11:45:49.171080 3184285 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:45:49.260782 3184285 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:45:49.618636 3184285 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:45:49.618979 3184285 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:45:49.874573 3184285 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:45:50.044392 3184285 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:45:50.503244 3184285 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:45:51.074211 3184285 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:45:51.533277 3184285 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:45:51.533834 3184285 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:45:51.536865 3184285 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:45:51.541029 3184285 out.go:252]   - Booting up control plane ...
	I1217 11:45:51.541193 3184285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:45:51.541293 3184285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:45:51.541372 3184285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:45:51.597017 3184285 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:45:51.597134 3184285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:45:51.605474 3184285 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:45:51.605912 3184285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:45:51.605968 3184285 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:45:51.848873 3184285 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:45:51.849032 3184285 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:49:51.848988 3184285 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000474741s
	I1217 11:49:51.849016 3184285 kubeadm.go:319] 
	I1217 11:49:51.849074 3184285 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:49:51.849107 3184285 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:49:51.849212 3184285 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:49:51.849217 3184285 kubeadm.go:319] 
	I1217 11:49:51.849321 3184285 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:49:51.849353 3184285 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:49:51.849383 3184285 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:49:51.849387 3184285 kubeadm.go:319] 
	I1217 11:49:51.854931 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:49:51.855468 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:49:51.855640 3184285 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:49:51.856478 3184285 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 11:49:51.856526 3184285 kubeadm.go:319] 
	I1217 11:49:51.856638 3184285 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
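The failed init above blocks on the kubelet's healthz endpoint and suggests systemctl/journalctl for triage. On the docker driver the node is a container named after the profile, so the same checks can be run by hand; a sketch, assuming curl is available in the kicbase image:

	# the checks kubeadm recommends, run inside the kic node container
	docker exec no-preload-118262 systemctl status kubelet --no-pager
	docker exec no-preload-118262 journalctl -xeu kubelet --no-pager | tail -n 50
	# the probe kubeadm polls for up to 4m0s; a healthy kubelet answers "ok"
	docker exec no-preload-118262 curl -sSL http://127.0.0.1:10248/healthz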
	W1217 11:49:51.856795 3184285 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-118262] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-118262] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000474741s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
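The SystemVerification warning in the dump above says that on a cgroup v1 host, a v1.35+ kubelet must explicitly opt back in via the FailCgroupV1 configuration option. A minimal sketch of that setting as a KubeletConfiguration document; the camelCase field name follows the KubeletConfiguration API, and appending it to minikube's generated kubeadm.yaml is an assumption (if that file already carries a KubeletConfiguration block, the field belongs there instead):

	# sketch: keep a v1.35+ kubelet from failing on a cgroup v1 host (per the warning above)
	cat <<'EOF' | sudo tee -a /var/tmp/minikube/kubeadm.yaml
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF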
	
	I1217 11:49:51.856894 3184285 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 11:49:52.277719 3184285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:49:52.292537 3184285 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:49:52.292613 3184285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:49:52.302828 3184285 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:49:52.302846 3184285 kubeadm.go:158] found existing configuration files:
	
	I1217 11:49:52.302898 3184285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:49:52.313123 3184285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:49:52.313189 3184285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:49:52.321690 3184285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:49:52.331730 3184285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:49:52.331844 3184285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:49:52.340184 3184285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:49:52.348768 3184285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:49:52.348833 3184285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:49:52.362335 3184285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:49:52.371680 3184285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:49:52.371810 3184285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:49:52.380071 3184285 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:49:52.426431 3184285 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:49:52.426840 3184285 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:49:52.509728 3184285 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:49:52.509841 3184285 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:49:52.509909 3184285 kubeadm.go:319] OS: Linux
	I1217 11:49:52.509976 3184285 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:49:52.510055 3184285 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:49:52.510128 3184285 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:49:52.510211 3184285 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:49:52.510276 3184285 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:49:52.510378 3184285 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:49:52.510450 3184285 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:49:52.510512 3184285 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:49:52.510566 3184285 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:49:52.578615 3184285 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:49:52.578920 3184285 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:49:52.579049 3184285 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:49:52.585015 3184285 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:49:52.590726 3184285 out.go:252]   - Generating certificates and keys ...
	I1217 11:49:52.590881 3184285 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:49:52.590983 3184285 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:49:52.591105 3184285 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 11:49:52.591202 3184285 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 11:49:52.591317 3184285 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 11:49:52.591407 3184285 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 11:49:52.591509 3184285 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 11:49:52.591602 3184285 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 11:49:52.591754 3184285 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 11:49:52.591879 3184285 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 11:49:52.591954 3184285 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 11:49:52.592047 3184285 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:49:52.983215 3184285 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:49:53.575992 3184285 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:49:53.987959 3184285 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:49:54.197165 3184285 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:49:54.496639 3184285 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:49:54.497214 3184285 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:49:54.499892 3184285 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:49:54.503136 3184285 out.go:252]   - Booting up control plane ...
	I1217 11:49:54.503250 3184285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:49:54.503333 3184285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:49:54.503405 3184285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:49:54.524103 3184285 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:49:54.524550 3184285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:49:54.532570 3184285 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:49:54.532919 3184285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:49:54.532970 3184285 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:49:54.678236 3184285 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:49:54.678359 3184285 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:53:54.678520 3184285 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061049s
	I1217 11:53:54.678562 3184285 kubeadm.go:319] 
	I1217 11:53:54.678668 3184285 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:53:54.678735 3184285 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:53:54.679061 3184285 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:53:54.679078 3184285 kubeadm.go:319] 
	I1217 11:53:54.679259 3184285 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:53:54.679319 3184285 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:53:54.679610 3184285 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:53:54.679619 3184285 kubeadm.go:319] 
	I1217 11:53:54.684331 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:53:54.684946 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:53:54.685447 3184285 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:53:54.685728 3184285 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 11:53:54.685741 3184285 kubeadm.go:319] 
	I1217 11:53:54.685819 3184285 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 11:53:54.685877 3184285 kubeadm.go:403] duration metric: took 8m8.248541569s to StartCluster
	I1217 11:53:54.685915 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:54.685995 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:54.712731 3184285 cri.go:89] found id: ""
	I1217 11:53:54.712767 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.712778 3184285 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:53:54.712784 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:53:54.712847 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:54.738074 3184285 cri.go:89] found id: ""
	I1217 11:53:54.738101 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.738110 3184285 logs.go:284] No container was found matching "etcd"
	I1217 11:53:54.738116 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:53:54.738176 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:54.763115 3184285 cri.go:89] found id: ""
	I1217 11:53:54.763142 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.763151 3184285 logs.go:284] No container was found matching "coredns"
	I1217 11:53:54.763160 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:54.763223 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:54.788613 3184285 cri.go:89] found id: ""
	I1217 11:53:54.788637 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.788646 3184285 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:53:54.788652 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:54.788710 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:54.814171 3184285 cri.go:89] found id: ""
	I1217 11:53:54.814207 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.814216 3184285 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:54.814222 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:54.814287 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:54.839339 3184285 cri.go:89] found id: ""
	I1217 11:53:54.839362 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.839370 3184285 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:53:54.839376 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:54.839434 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:54.866460 3184285 cri.go:89] found id: ""
	I1217 11:53:54.866486 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.866495 3184285 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:54.866505 3184285 logs.go:123] Gathering logs for container status ...
	I1217 11:53:54.866516 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:54.897933 3184285 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:54.897961 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:54.955505 3184285 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:54.955540 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:54.972937 3184285 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:54.972967 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:55.055017 3184285 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:55.055059 3184285 logs.go:123] Gathering logs for containerd ...
	I1217 11:53:55.055072 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
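The log-gathering pass above (crictl container listing, kubelet and containerd journals, describe nodes) can be reproduced by hand against the same node container, or bundled by minikube itself; the container name is assumed to match the profile:

	# the same evidence minikube collects, gathered manually
	docker exec no-preload-118262 crictl ps -a
	docker exec no-preload-118262 journalctl -u kubelet -n 400 --no-pager
	docker exec no-preload-118262 journalctl -u containerd -n 400 --no-pager
	# or let minikube bundle everything into one file
	minikube -p no-preload-118262 logs --file=logs.txt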
	W1217 11:53:55.106009 3184285 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 11:53:55.106084 3184285 out.go:285] * 
	W1217 11:53:55.106176 3184285 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:53:55.106224 3184285 out.go:285] * 
	W1217 11:53:55.108372 3184285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:53:55.114186 3184285 out.go:203] 
	W1217 11:53:55.117966 3184285 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:53:55.118025 3184285 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:53:55.118053 3184285 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 11:53:55.121856 3184285 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
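
The suggestion in the trace above maps directly onto a start flag. A hedged sketch of retrying the same invocation with it, flags copied from the failing args; whether the systemd cgroup driver is actually right for this host is not verified here:

    out/minikube-linux-arm64 delete -p no-preload-118262
    out/minikube-linux-arm64 start -p no-preload-118262 --memory=3072 \
      --alsologtostderr --wait=true --preload=false --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd
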
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-118262
helpers_test.go:244: (dbg) docker inspect no-preload-118262:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	        "Created": "2025-12-17T11:45:23.889791979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3184585,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:45:23.975335333Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hostname",
	        "HostsPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hosts",
	        "LogPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362-json.log",
	        "Name": "/no-preload-118262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-118262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-118262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	                "LowerDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-118262",
	                "Source": "/var/lib/docker/volumes/no-preload-118262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-118262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-118262",
	                "name.minikube.sigs.k8s.io": "no-preload-118262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edfd8c84516eb23c0ad2b26b7726367c3e837ddca981000c80312ea31fd9a26a",
	            "SandboxKey": "/var/run/docker/netns/edfd8c84516e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-118262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:56:4e:97:d8:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3227851744df2bdac9c367dc789ddfe2892f877b7b9b947cdcd81cb2897c4ba1",
	                    "EndpointID": "b3f5ff720ab2b961fc2a2904ac219198576784cb510a37f2350f10bf17783082",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-118262",
	                        "4578079103f7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
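
The inspect dump above mostly rules the container itself out: State.Running is true and all five ports are published. The same fields can be pulled without the full JSON via docker's standard Go-template flag (container name taken from this test):

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' no-preload-118262
    docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-118262
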
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262: exit status 6 (333.15923ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 11:53:55.558230 3209600 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
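
The warning points at a stale kubectl context: the start failed before the profile's endpoint was written to the kubeconfig the test uses. The fix the warning itself names, sketched against this profile:

    out/minikube-linux-arm64 update-context -p no-preload-118262
    kubectl config current-context
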
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-118262 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │                     │
	│ start   │ -p cert-expiration-182607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                          │ cert-expiration-182607       │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:45 UTC │
	│ delete  │ -p cert-expiration-182607                                                                                                                                                                                                                                │ cert-expiration-182607       │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:45 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:46 UTC │
	│ addons  │ enable metrics-server -p embed-certs-628462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:46 UTC │
	│ stop    │ -p embed-certs-628462 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:47 UTC │
	│ addons  │ enable dashboard -p embed-certs-628462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ image   │ embed-certs-628462 image list --format=json                                                                                                                                                                                                              │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ pause   │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ unpause │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:50:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:50:33.770675 3204903 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:50:33.770892 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.770930 3204903 out.go:374] Setting ErrFile to fd 2...
	I1217 11:50:33.770950 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.771242 3204903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:50:33.771720 3204903 out.go:368] Setting JSON to false
	I1217 11:50:33.772826 3204903 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63184,"bootTime":1765909050,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:50:33.772934 3204903 start.go:143] virtualization:  
	I1217 11:50:33.777422 3204903 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:50:33.781147 3204903 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:50:33.781258 3204903 notify.go:221] Checking for updates...
	I1217 11:50:33.787770 3204903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:50:33.790969 3204903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:50:33.794108 3204903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:50:33.797396 3204903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:50:33.800914 3204903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:50:33.804694 3204903 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:33.804819 3204903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:50:33.836693 3204903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:50:33.836824 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.905198 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.886446399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.905310 3204903 docker.go:319] overlay module found
	I1217 11:50:33.908522 3204903 out.go:179] * Using the docker driver based on user configuration
	I1217 11:50:33.911483 3204903 start.go:309] selected driver: docker
	I1217 11:50:33.911512 3204903 start.go:927] validating driver "docker" against <nil>
	I1217 11:50:33.911528 3204903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:50:33.912303 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.968344 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.958386366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.968600 3204903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 11:50:33.968643 3204903 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 11:50:33.968883 3204903 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:50:33.971848 3204903 out.go:179] * Using Docker driver with root privileges
	I1217 11:50:33.974707 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:33.974785 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:33.974803 3204903 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:50:33.974912 3204903 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:33.980004 3204903 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 11:50:33.982917 3204903 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:50:33.985952 3204903 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:50:33.988892 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:33.988945 3204903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 11:50:33.988990 3204903 cache.go:65] Caching tarball of preloaded images
	I1217 11:50:33.989015 3204903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:50:33.989111 3204903 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 11:50:33.989123 3204903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 11:50:33.989239 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:33.989268 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json: {Name:mk0a64d844d14a82596feb52de4f9f10fa21ee9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:34.014470 3204903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:50:34.014499 3204903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:50:34.014516 3204903 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:50:34.014550 3204903 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:50:34.014670 3204903 start.go:364] duration metric: took 97.672µs to acquireMachinesLock for "newest-cni-669680"
	I1217 11:50:34.014703 3204903 start.go:93] Provisioning new machine with config: &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:50:34.014791 3204903 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:50:34.018329 3204903 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:50:34.018591 3204903 start.go:159] libmachine.API.Create for "newest-cni-669680" (driver="docker")
	I1217 11:50:34.018632 3204903 client.go:173] LocalClient.Create starting
	I1217 11:50:34.018712 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 11:50:34.018752 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018777 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.018837 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 11:50:34.018864 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018877 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.019266 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:50:34.036819 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:50:34.036904 3204903 network_create.go:284] running [docker network inspect newest-cni-669680] to gather additional debugging logs...
	I1217 11:50:34.036925 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680
	W1217 11:50:34.053220 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 returned with exit code 1
	I1217 11:50:34.053254 3204903 network_create.go:287] error running [docker network inspect newest-cni-669680]: docker network inspect newest-cni-669680: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-669680 not found
	I1217 11:50:34.053268 3204903 network_create.go:289] output of [docker network inspect newest-cni-669680]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-669680 not found
	
	** /stderr **
	I1217 11:50:34.053385 3204903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:34.073660 3204903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
	I1217 11:50:34.074039 3204903 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e0545776686c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:70:9e:49:ed:7d} reservation:<nil>}
	I1217 11:50:34.074407 3204903 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-279becfad84b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:b7:62:6e:a9:ee} reservation:<nil>}
	I1217 11:50:34.074906 3204903 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a004b0}
	I1217 11:50:34.074944 3204903 network_create.go:124] attempt to create docker network newest-cni-669680 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 11:50:34.075027 3204903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-669680 newest-cni-669680
	I1217 11:50:34.133594 3204903 network_create.go:108] docker network newest-cni-669680 192.168.76.0/24 created
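
	# A sketch of the subnet scan above (192.168.49/58/67 taken, 76 free),
	# reproducible from the host with the stock docker CLI:
	docker network ls -q | xargs -n1 docker network inspect \
	  -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'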
	I1217 11:50:34.133624 3204903 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-669680" container
	I1217 11:50:34.133717 3204903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:50:34.150255 3204903 cli_runner.go:164] Run: docker volume create newest-cni-669680 --label name.minikube.sigs.k8s.io=newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:50:34.168619 3204903 oci.go:103] Successfully created a docker volume newest-cni-669680
	I1217 11:50:34.168718 3204903 cli_runner.go:164] Run: docker run --rm --name newest-cni-669680-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --entrypoint /usr/bin/test -v newest-cni-669680:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:50:34.733678 3204903 oci.go:107] Successfully prepared a docker volume newest-cni-669680
	I1217 11:50:34.733775 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:34.733794 3204903 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:50:34.733863 3204903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:50:38.835811 3204903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.101893526s)
	I1217 11:50:38.835847 3204903 kic.go:203] duration metric: took 4.10204956s to extract preloaded images to volume ...
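The two docker run calls above show how the preload is delivered: a throwaway container bind-mounts the lz4 tarball read-only plus the node's named volume, and tar inside the kicbase image unpacks it into /extractDir. A self-contained sketch of the same invocation, with a hypothetical local tarball path:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bind-mount the preload tarball read-only and the node's named volume,
	// then let tar inside the kicbase image unpack it (mirrors the log above).
	const (
		tarball = "/path/to/preloaded-images.tar.lz4" // hypothetical local path
		volume  = "newest-cni-669680"
		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"
	)
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
	}
	fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
}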
	W1217 11:50:38.835990 3204903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 11:50:38.836106 3204903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:50:38.889030 3204903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-669680 --name newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-669680 --network newest-cni-669680 --ip 192.168.76.2 --volume newest-cni-669680:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:50:39.198138 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Running}}
	I1217 11:50:39.220630 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.245099 3204903 cli_runner.go:164] Run: docker exec newest-cni-669680 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:50:39.296762 3204903 oci.go:144] the created container "newest-cni-669680" has a running status.
	I1217 11:50:39.296821 3204903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519...
	I1217 11:50:39.301246 3204903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 11:50:39.326812 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.353133 3204903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:50:39.353152 3204903 kic_runner.go:114] Args: [docker exec --privileged newest-cni-669680 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:50:39.407725 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.427651 3204903 machine.go:94] provisionDockerMachine start ...
	I1217 11:50:39.427814 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:39.449037 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:39.449153 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:39.449161 3204903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:50:39.449689 3204903 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46730->127.0.0.1:36043: read: connection reset by peer
	I1217 11:50:42.588075 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 11:50:42.588101 3204903 ubuntu.go:182] provisioning hostname "newest-cni-669680"
	I1217 11:50:42.588181 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.610896 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.611004 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.611019 3204903 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 11:50:42.758221 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 11:50:42.758323 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.776901 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.777030 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.777054 3204903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:50:42.909042 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
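Each provisioning step above (hostname, /etc/hostname, the /etc/hosts patch) is a single command executed over the container's published SSH port on 127.0.0.1. The first dial fails with a connection reset because sshd inside the node is still coming up; the provisioner evidently retries, which is why a successful hostname result appears three seconds later. A minimal sketch of the run-one-command-over-SSH pattern, using golang.org/x/crypto/ssh (an assumed dependency; minikube's own plumbing sits behind its libmachine wrapper):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH executes one command on the kic node over the published 127.0.0.1
// SSH port, the same pattern the provisioner uses for hostname setup above.
func runSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Port and key path are taken from the log; adjust for your machine.
	out, err := runSSH("127.0.0.1:36043",
		os.Getenv("HOME")+"/.minikube/machines/newest-cni-669680/id_ed25519",
		`sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}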
	I1217 11:50:42.909069 3204903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:50:42.909089 3204903 ubuntu.go:190] setting up certificates
	I1217 11:50:42.909098 3204903 provision.go:84] configureAuth start
	I1217 11:50:42.909162 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:42.927260 3204903 provision.go:143] copyHostCerts
	I1217 11:50:42.927326 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:50:42.927335 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:50:42.927414 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:50:42.927515 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:50:42.927521 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:50:42.927546 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:50:42.927611 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:50:42.927615 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:50:42.927639 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:50:42.927694 3204903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
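The server cert above is issued for the SAN set [127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]. A crypto/x509 sketch that produces a certificate with exactly that SAN list; note the real flow signs with the minikubeCA key pair, while this sketch self-signs to stay self-contained:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirror the log: loopback, node IP, and the minikube hostnames.
	// Real provisioning signs with the minikubeCA key; this self-signs.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-669680"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-669680"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}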
	I1217 11:50:43.131974 3204903 provision.go:177] copyRemoteCerts
	I1217 11:50:43.132056 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:50:43.132097 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.150139 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.244232 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:50:43.264226 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:50:43.288821 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:50:43.309853 3204903 provision.go:87] duration metric: took 400.734271ms to configureAuth
	I1217 11:50:43.309977 3204903 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:50:43.310242 3204903 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:43.310275 3204903 machine.go:97] duration metric: took 3.882542746s to provisionDockerMachine
	I1217 11:50:43.310297 3204903 client.go:176] duration metric: took 9.291651647s to LocalClient.Create
	I1217 11:50:43.310348 3204903 start.go:167] duration metric: took 9.291744428s to libmachine.API.Create "newest-cni-669680"
	I1217 11:50:43.310376 3204903 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 11:50:43.310413 3204903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:50:43.310526 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:50:43.310608 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.332854 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.428662 3204903 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:50:43.432294 3204903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:50:43.432325 3204903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:50:43.432337 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:50:43.432397 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:50:43.432547 3204903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:50:43.432651 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:50:43.440004 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:43.457337 3204903 start.go:296] duration metric: took 146.930324ms for postStartSetup
	I1217 11:50:43.457705 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.474473 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:43.474760 3204903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:50:43.474809 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.491508 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.585952 3204903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:50:43.591149 3204903 start.go:128] duration metric: took 9.576343313s to createHost
	I1217 11:50:43.591175 3204903 start.go:83] releasing machines lock for "newest-cni-669680", held for 9.576490895s
	I1217 11:50:43.591260 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.609275 3204903 ssh_runner.go:195] Run: cat /version.json
	I1217 11:50:43.609319 3204903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:50:43.609330 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.609377 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.631237 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.636805 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.724501 3204903 ssh_runner.go:195] Run: systemctl --version
	I1217 11:50:43.819531 3204903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:50:43.823938 3204903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:50:43.824018 3204903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:50:43.852205 3204903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
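Disabling the competing bridge/podman CNI configs is just a rename to a .mk_disabled suffix, as the find/-exec mv above shows, so the recommended CNI (kindnet, chosen later) is the only one containerd sees. The same step sketched in Go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableBridgeCNIs mirrors the find/mv step above: rename bridge/podman
// CNI configs out of the way so the chosen CNI wins.
func disableBridgeCNIs(dir string) ([]string, error) {
	var moved []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return moved, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, m)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(moved, err)
}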
	I1217 11:50:43.852281 3204903 start.go:496] detecting cgroup driver to use...
	I1217 11:50:43.852330 3204903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:50:43.852407 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:50:43.867801 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:50:43.881415 3204903 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:50:43.881505 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:50:43.898869 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:50:43.917331 3204903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:50:44.042660 3204903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:50:44.176397 3204903 docker.go:234] disabling docker service ...
	I1217 11:50:44.176490 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:50:44.197465 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:50:44.211041 3204903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:50:44.324043 3204903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:50:44.437310 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:50:44.451253 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:50:44.468227 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:50:44.477660 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:50:44.487940 3204903 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:50:44.488046 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:50:44.497581 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.506638 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:50:44.516061 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.524921 3204903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:50:44.533457 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:50:44.542606 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:50:44.551989 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:50:44.561578 3204903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:50:44.570051 3204903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:50:44.577822 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:44.688867 3204903 ssh_runner.go:195] Run: sudo systemctl restart containerd
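The run of sed commands above rewrites /etc/containerd/config.toml in place before the restart: pin the pause image, force SystemdCgroup = false to match the detected cgroupfs driver, normalize the runc runtime name, fix conf_dir, and enable unprivileged ports. The first two rules, expressed with Go's regexp package (a sketch of the rewrite, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The same rewrites the sed calls above perform: preserve indentation,
	// pin pause:3.10.1, and force the cgroupfs driver.
	conf := `  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.9"
    SystemdCgroup = true
`
	rules := []struct{ re, repl string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	}
	for _, r := range rules {
		conf = regexp.MustCompile(r.re).ReplaceAllString(conf, r.repl)
	}
	fmt.Print(conf)
}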
	I1217 11:50:44.840667 3204903 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:50:44.840788 3204903 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:50:44.845363 3204903 start.go:564] Will wait 60s for crictl version
	I1217 11:50:44.845485 3204903 ssh_runner.go:195] Run: which crictl
	I1217 11:50:44.849376 3204903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:50:44.883387 3204903 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 11:50:44.883511 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.905807 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.930438 3204903 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:50:44.933446 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:44.950378 3204903 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:50:44.954519 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
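The /etc/hosts update above is the usual idempotent pattern: strip any stale host.minikube.internal line, append the fresh mapping, then copy the result back over /etc/hosts. As a pure function in Go:

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the grep -v / echo / cp pipeline above: drop any
// stale line for the name, then append the fresh IP mapping.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, will be rewritten
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.58.1\thost.minikube.internal"
	fmt.Print(ensureHostsEntry(hosts, "192.168.76.1", "host.minikube.internal"))
}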
	I1217 11:50:44.968645 3204903 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 11:50:44.971593 3204903 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:50:44.971744 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:44.971843 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.011583 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.011607 3204903 containerd.go:534] Images already preloaded, skipping extraction
	I1217 11:50:45.011729 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.074368 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.074393 3204903 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:50:45.074401 3204903 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:50:45.074511 3204903 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
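The kubelet unit above is rendered from the node's settings (binary path keyed by Kubernetes version, hostname override, node IP) and scp'd into the systemd drop-in directory a few lines later. A text/template sketch that reproduces it; the template text is an approximation of what the log shows, not minikube's exact source:

package main

import (
	"os"
	"text/template"
)

// unit approximates the kubelet systemd unit printed in the log above.
const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.K8sVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"Runtime": "containerd", "K8sVersion": "v1.35.0-rc.1",
		"Node": "newest-cni-669680", "IP": "192.168.76.2",
	})
}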
	I1217 11:50:45.074583 3204903 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:50:45.124774 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:45.124803 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:45.124823 3204903 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 11:50:45.124848 3204903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:50:45.125086 3204903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
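	The KubeletConfiguration block above deliberately neuters disk eviction for CI (imageGCHighThresholdPercent: 100, all evictionHard thresholds at 0%) and tolerates swap. A quick round-trip with gopkg.in/yaml.v3 (an assumed dependency) that reads those fields back:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig picks out the eviction-related fields from the
// KubeletConfiguration document shown above.
type kubeletConfig struct {
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

func main() {
	doc := []byte(`
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`)
	var kc kubeletConfig
	if err := yaml.Unmarshal(doc, &kc); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", kc)
}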
	
	I1217 11:50:45.125178 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:50:45.136963 3204903 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:50:45.137076 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:50:45.146865 3204903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 11:50:45.164693 3204903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:50:45.183943 3204903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1217 11:50:45.201780 3204903 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:50:45.209411 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:50:45.227993 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:45.376186 3204903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:50:45.396221 3204903 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 11:50:45.396257 3204903 certs.go:195] generating shared ca certs ...
	I1217 11:50:45.396275 3204903 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.396432 3204903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:50:45.396497 3204903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:50:45.396511 3204903 certs.go:257] generating profile certs ...
	I1217 11:50:45.396576 3204903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 11:50:45.396594 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt with IP's: []
	I1217 11:50:45.498992 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt ...
	I1217 11:50:45.499023 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt: {Name:mkfb66bec095c72b7c1a0e563529baf2180c300c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499228 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key ...
	I1217 11:50:45.499243 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key: {Name:mk7292acf4e53dd5012d44cc923a43c80ae9a7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499340 3204903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 11:50:45.499360 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 11:50:45.885492 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 ...
	I1217 11:50:45.885525 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161: {Name:mkc2aab84e543777fe00770e300fac9f47cd579f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885732 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 ...
	I1217 11:50:45.885749 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161: {Name:mk25ae271c13c745dd8ef046c320963d505be1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885837 3204903 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt
	I1217 11:50:45.885921 3204903 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key
	I1217 11:50:45.885986 3204903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 11:50:45.886007 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt with IP's: []
	I1217 11:50:46.187502 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt ...
	I1217 11:50:46.187541 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt: {Name:mk12f9e3a4ac82afa8ef3e938731ab0419f581a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:46.187741 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key ...
	I1217 11:50:46.187756 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key: {Name:mk258e31e31368b8ae182e758b28fd15f98dabb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:46.187958 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:50:46.188008 3204903 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:50:46.188031 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:50:46.188065 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:50:46.188095 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:50:46.188125 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:50:46.188174 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:46.188855 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:50:46.209179 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:50:46.229625 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:50:46.248348 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:50:46.280053 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:50:46.299540 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 11:50:46.331420 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:50:46.354812 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:50:46.379741 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:50:46.398349 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:50:46.416502 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:50:46.434656 3204903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:50:46.448045 3204903 ssh_runner.go:195] Run: openssl version
	I1217 11:50:46.454404 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.462383 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:50:46.470220 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474117 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474205 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.515776 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:50:46.523521 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:50:46.531167 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.538808 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:50:46.546526 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550351 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550420 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.591537 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:50:46.599582 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
	I1217 11:50:46.607230 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.615038 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:50:46.623144 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627219 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627293 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.668548 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:50:46.676480 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
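The openssl/ln sequence above installs each CA under its OpenSSL subject-hash filename (b5213941.0, 51391683.0, 3ec20f2e.0) so TLS stacks can resolve the issuer by hash lookup in /etc/ssl/certs. Sketched in Go, shelling out to openssl just as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash reproduces the openssl/ln steps above: ask openssl for
// the certificate's subject hash, then point <hash>.0 at the cert so TLS
// libraries can find the CA by hash lookup.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := certsDir + "/" + hash + ".0"
	os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}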
	I1217 11:50:46.684254 3204903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:50:46.687989 3204903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:50:46.688053 3204903 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:46.688186 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:50:46.688251 3204903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:50:46.715504 3204903 cri.go:89] found id: ""
	I1217 11:50:46.715577 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:50:46.723636 3204903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:50:46.731913 3204903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:50:46.732013 3204903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:50:46.740391 3204903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:50:46.740486 3204903 kubeadm.go:158] found existing configuration files:
	
	I1217 11:50:46.740554 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:50:46.748658 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:50:46.748734 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:50:46.756251 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:50:46.764744 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:50:46.764812 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:50:46.772495 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.780304 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:50:46.780374 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.787858 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:50:46.795827 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:50:46.795920 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:50:46.803940 3204903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:50:46.842300 3204903 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:50:46.842364 3204903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:50:46.914982 3204903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:50:46.915066 3204903 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:50:46.915120 3204903 kubeadm.go:319] OS: Linux
	I1217 11:50:46.915224 3204903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:50:46.915306 3204903 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:50:46.915380 3204903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:50:46.915458 3204903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:50:46.915534 3204903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:50:46.915612 3204903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:50:46.915688 3204903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:50:46.915760 3204903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:50:46.915833 3204903 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:50:46.991927 3204903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:50:46.992117 3204903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:50:46.992264 3204903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:50:47.011559 3204903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:50:47.018032 3204903 out.go:252]   - Generating certificates and keys ...
	I1217 11:50:47.018195 3204903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:50:47.018301 3204903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:50:47.129470 3204903 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:50:47.445618 3204903 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:50:47.915158 3204903 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:50:48.499656 3204903 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:50:48.596834 3204903 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:50:48.597124 3204903 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.753661 3204903 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:50:48.754010 3204903 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.982189 3204903 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:50:49.176711 3204903 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:50:49.329925 3204903 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:50:49.330545 3204903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:50:49.669219 3204903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:50:49.769896 3204903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:50:50.134620 3204903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:50:50.518232 3204903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:50:51.159536 3204903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:50:51.160438 3204903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:50:51.163380 3204903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:50:51.167113 3204903 out.go:252]   - Booting up control plane ...
	I1217 11:50:51.167275 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:50:51.167359 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:50:51.168888 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:50:51.187617 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:50:51.187958 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:50:51.195573 3204903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:50:51.195900 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:50:51.195946 3204903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:50:51.332866 3204903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:50:51.332987 3204903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:53:54.678520 3184285 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061049s
	I1217 11:53:54.678562 3184285 kubeadm.go:319] 
	I1217 11:53:54.678668 3184285 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:53:54.678735 3184285 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:53:54.679061 3184285 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:53:54.679078 3184285 kubeadm.go:319] 
	I1217 11:53:54.679259 3184285 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:53:54.679319 3184285 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:53:54.679610 3184285 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:53:54.679619 3184285 kubeadm.go:319] 
	I1217 11:53:54.684331 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:53:54.684946 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:53:54.685447 3184285 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:53:54.685728 3184285 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 11:53:54.685741 3184285 kubeadm.go:319] 
	I1217 11:53:54.685819 3184285 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 11:53:54.685877 3184285 kubeadm.go:403] duration metric: took 8m8.248541569s to StartCluster
	I1217 11:53:54.685915 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:54.685995 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:54.712731 3184285 cri.go:89] found id: ""
	I1217 11:53:54.712767 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.712778 3184285 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:53:54.712784 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:53:54.712847 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:54.738074 3184285 cri.go:89] found id: ""
	I1217 11:53:54.738101 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.738110 3184285 logs.go:284] No container was found matching "etcd"
	I1217 11:53:54.738116 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:53:54.738176 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:54.763115 3184285 cri.go:89] found id: ""
	I1217 11:53:54.763142 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.763151 3184285 logs.go:284] No container was found matching "coredns"
	I1217 11:53:54.763160 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:54.763223 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:54.788613 3184285 cri.go:89] found id: ""
	I1217 11:53:54.788637 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.788646 3184285 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:53:54.788652 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:54.788710 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:54.814171 3184285 cri.go:89] found id: ""
	I1217 11:53:54.814207 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.814216 3184285 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:54.814222 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:54.814287 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:54.839339 3184285 cri.go:89] found id: ""
	I1217 11:53:54.839362 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.839370 3184285 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:53:54.839376 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:54.839434 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:54.866460 3184285 cri.go:89] found id: ""
	I1217 11:53:54.866486 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.866495 3184285 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:54.866505 3184285 logs.go:123] Gathering logs for container status ...
	I1217 11:53:54.866516 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:54.897933 3184285 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:54.897961 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:54.955505 3184285 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:54.955540 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:54.972937 3184285 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:54.972967 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:55.055017 3184285 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:55.055059 3184285 logs.go:123] Gathering logs for containerd ...
	I1217 11:53:55.055072 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 11:53:55.106009 3184285 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 11:53:55.106084 3184285 out.go:285] * 
	W1217 11:53:55.106176 3184285 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:53:55.106224 3184285 out.go:285] * 
	W1217 11:53:55.108372 3184285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:53:55.114186 3184285 out.go:203] 
	W1217 11:53:55.117966 3184285 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:53:55.118025 3184285 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:53:55.118053 3184285 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 11:53:55.121856 3184285 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:45:33 no-preload-118262 containerd[757]: time="2025-12-17T11:45:33.505204167Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.939956582Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.942293323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.949096337Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.949777101Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.535897084Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.538753041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.553736301Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.554961027Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.053706291Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.056518475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.074021896Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.075564022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.899212727Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.902323078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.911666939Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.912612705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.544997473Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.548274171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.559283871Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.561918164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.580017138Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.582800563Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.590535248Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.590987315Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:53:56.206542    5525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:56.207085    5525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:56.208789    5525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:56.209469    5525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:56.211032    5525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:53:56 up 17:36,  0 user,  load average: 0.81, 1.21, 1.78
	Linux no-preload-118262 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:53:53 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:53 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 17 11:53:53 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:53 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:53 no-preload-118262 kubelet[5331]: E1217 11:53:53.801783    5331 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:53 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:53 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:54 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 17 11:53:54 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:54 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:54 no-preload-118262 kubelet[5336]: E1217 11:53:54.547367    5336 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:54 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:54 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:55 no-preload-118262 kubelet[5421]: E1217 11:53:55.325373    5421 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:55 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:55 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:56 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:56 no-preload-118262 kubelet[5490]: E1217 11:53:56.060640    5490 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
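
Note on the kubelet excerpt above: every restart fails with the same validation error ("kubelet is configured to not run on a host using cgroup v1"), i.e. kubelet v1.35.0-rc.1 is refusing to start because the node is on cgroup v1, not because of a crash loop in the control plane itself. A quick, generic way to confirm which cgroup version a host or minikube node is using (this diagnostic is an illustration, not part of the test run) is to check the filesystem type mounted at /sys/fs/cgroup:

	# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates the legacy v1 hierarchy.
	stat -fc %T /sys/fs/cgroup
	# Inside the node used by this test (profile name taken from the log above):
	minikube ssh -p no-preload-118262 -- stat -fc %T /sys/fs/cgroup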
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262: exit status 6 (351.735258ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 11:53:56.661204 3209827 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-118262" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (514.03s)
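
This failure matches the SystemVerification warning kubeadm printed: on a cgroup v1 host, kubelet v1.35 or newer fails its own configuration validation unless the kubelet configuration option FailCgroupV1 is explicitly set to false. Two possible workarounds, both taken from the log output itself, are sketched below; the patch-file path is hypothetical and whether this build of minikube plumbs the option through is an assumption, so treat this as a sketch rather than a verified fix:

	# (a) The suggestion printed by minikube, appended to the failing start command:
	minikube start -p no-preload-118262 --extra-config=kubelet.cgroup-driver=systemd

	# (b) Per the kubeadm warning, explicitly allow cgroup v1 in the kubelet
	# configuration. On a kubeadm-managed node this can be delivered as a
	# strategic-merge patch against the "kubeletconfiguration" target (the log
	# shows kubeadm already applying such a patch); the directory is illustrative:
	mkdir -p /tmp/kubeadm-patches
	cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
	# kubeadm init ... --patches /tmp/kubeadm-patches
	# Note: per the warning text, the cgroup v1 validation must also be skipped explicitly.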

TestStartStop/group/newest-cni/serial/FirstStart (501.36s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1217 11:50:43.082842 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:51:34.375790 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:52:31.283170 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:53:19.223526 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:53:36.153295 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:53:50.514915 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m19.77310177s)

-- stdout --
	* [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1217 11:50:33.770675 3204903 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:50:33.770892 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.770930 3204903 out.go:374] Setting ErrFile to fd 2...
	I1217 11:50:33.770950 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.771242 3204903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:50:33.771720 3204903 out.go:368] Setting JSON to false
	I1217 11:50:33.772826 3204903 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63184,"bootTime":1765909050,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:50:33.772934 3204903 start.go:143] virtualization:  
	I1217 11:50:33.777422 3204903 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:50:33.781147 3204903 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:50:33.781258 3204903 notify.go:221] Checking for updates...
	I1217 11:50:33.787770 3204903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:50:33.790969 3204903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:50:33.794108 3204903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:50:33.797396 3204903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:50:33.800914 3204903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:50:33.804694 3204903 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:33.804819 3204903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:50:33.836693 3204903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:50:33.836824 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.905198 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.886446399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.905310 3204903 docker.go:319] overlay module found
	I1217 11:50:33.908522 3204903 out.go:179] * Using the docker driver based on user configuration
	I1217 11:50:33.911483 3204903 start.go:309] selected driver: docker
	I1217 11:50:33.911512 3204903 start.go:927] validating driver "docker" against <nil>
	I1217 11:50:33.911528 3204903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:50:33.912303 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.968344 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.958386366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.968600 3204903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 11:50:33.968643 3204903 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 11:50:33.968883 3204903 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:50:33.971848 3204903 out.go:179] * Using Docker driver with root privileges
	I1217 11:50:33.974707 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:33.974785 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:33.974803 3204903 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:50:33.974912 3204903 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:33.980004 3204903 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 11:50:33.982917 3204903 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:50:33.985952 3204903 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:50:33.988892 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:33.988945 3204903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 11:50:33.988990 3204903 cache.go:65] Caching tarball of preloaded images
	I1217 11:50:33.989015 3204903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:50:33.989111 3204903 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 11:50:33.989123 3204903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 11:50:33.989239 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:33.989268 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json: {Name:mk0a64d844d14a82596feb52de4f9f10fa21ee9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:34.014470 3204903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:50:34.014499 3204903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:50:34.014516 3204903 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:50:34.014550 3204903 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:50:34.014670 3204903 start.go:364] duration metric: took 97.672µs to acquireMachinesLock for "newest-cni-669680"
	I1217 11:50:34.014703 3204903 start.go:93] Provisioning new machine with config: &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:50:34.014791 3204903 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:50:34.018329 3204903 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:50:34.018591 3204903 start.go:159] libmachine.API.Create for "newest-cni-669680" (driver="docker")
	I1217 11:50:34.018632 3204903 client.go:173] LocalClient.Create starting
	I1217 11:50:34.018712 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 11:50:34.018752 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018777 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.018837 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 11:50:34.018864 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018877 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.019266 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:50:34.036819 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:50:34.036904 3204903 network_create.go:284] running [docker network inspect newest-cni-669680] to gather additional debugging logs...
	I1217 11:50:34.036925 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680
	W1217 11:50:34.053220 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 returned with exit code 1
	I1217 11:50:34.053254 3204903 network_create.go:287] error running [docker network inspect newest-cni-669680]: docker network inspect newest-cni-669680: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-669680 not found
	I1217 11:50:34.053268 3204903 network_create.go:289] output of [docker network inspect newest-cni-669680]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-669680 not found
	
	** /stderr **
	I1217 11:50:34.053385 3204903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:34.073660 3204903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
	I1217 11:50:34.074039 3204903 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e0545776686c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:70:9e:49:ed:7d} reservation:<nil>}
	I1217 11:50:34.074407 3204903 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-279becfad84b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:b7:62:6e:a9:ee} reservation:<nil>}
	I1217 11:50:34.074906 3204903 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a004b0}
	I1217 11:50:34.074944 3204903 network_create.go:124] attempt to create docker network newest-cni-669680 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 11:50:34.075027 3204903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-669680 newest-cni-669680
	I1217 11:50:34.133594 3204903 network_create.go:108] docker network newest-cni-669680 192.168.76.0/24 created
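The three `skipping subnet ... that is taken` lines above are minikube's free-subnet search: it walks candidate private /24 ranges and creates the cluster's bridge network on the first one that no existing Docker network occupies. Below is a minimal Go sketch of that walk, assuming `docker` is on PATH; the simple `--format` template and the `listTakenSubnets` helper are illustrative stand-ins, not minikube's actual pkg/network code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listTakenSubnets collects the subnet of every existing Docker network.
// Illustrative only: the log above shows minikube using a richer template.
func listTakenSubnets() (map[string]bool, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if err != nil {
			continue // network vanished between ls and inspect
		}
		if s := strings.TrimSpace(string(out)); s != "" {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := listTakenSubnets()
	if err != nil {
		panic(err)
	}
	// Step the third octet by 9, matching the candidates in the log:
	// 192.168.49.0/24, .58, .67 are taken, so .76 is the first free one.
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			fmt.Println("using free private subnet", subnet)
			return
		}
	}
	fmt.Println("no free subnet found")
}
```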
	I1217 11:50:34.133624 3204903 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-669680" container
	I1217 11:50:34.133717 3204903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:50:34.150255 3204903 cli_runner.go:164] Run: docker volume create newest-cni-669680 --label name.minikube.sigs.k8s.io=newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:50:34.168619 3204903 oci.go:103] Successfully created a docker volume newest-cni-669680
	I1217 11:50:34.168718 3204903 cli_runner.go:164] Run: docker run --rm --name newest-cni-669680-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --entrypoint /usr/bin/test -v newest-cni-669680:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:50:34.733678 3204903 oci.go:107] Successfully prepared a docker volume newest-cni-669680
	I1217 11:50:34.733775 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:34.733794 3204903 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:50:34.733863 3204903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:50:38.835811 3204903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.101893526s)
	I1217 11:50:38.835847 3204903 kic.go:203] duration metric: took 4.10204956s to extract preloaded images to volume ...
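Preload extraction never touches the host filesystem directly: the lz4 tarball is bind-mounted read-only into a throwaway kicbase container and untarred into the named volume that later becomes the node's /var. A sketch of driving the same `docker run --entrypoint /usr/bin/tar` invocation from Go, with the paths and names taken from the log; `extractPreload` is a hypothetical helper, not minikube's API.

```go
package main

import (
	"os"
	"os/exec"
)

// extractPreload untars a preloaded-images tarball into a Docker volume by
// running tar inside the kicbase image, mirroring the command in the log.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
		"-v", volume+":/extractDir",        // named volume: the node's future /var
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Values as they appear in the report above.
	err := extractPreload(
		"/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4",
		"newest-cni-669680",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141")
	if err != nil {
		panic(err)
	}
}
```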
	W1217 11:50:38.835990 3204903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 11:50:38.836106 3204903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:50:38.889030 3204903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-669680 --name newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-669680 --network newest-cni-669680 --ip 192.168.76.2 --volume newest-cni-669680:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:50:39.198138 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Running}}
	I1217 11:50:39.220630 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.245099 3204903 cli_runner.go:164] Run: docker exec newest-cni-669680 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:50:39.296762 3204903 oci.go:144] the created container "newest-cni-669680" has a running status.
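Before provisioning, minikube confirms the container actually reached a running state (the two `container inspect` calls above). A tiny readiness poll in the same spirit, assuming a fixed retry budget rather than minikube's own retry helpers:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect` until the container reports
// State.Running=true or the attempts run out.
func waitRunning(name string, attempts int) error {
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q never reached running state", name)
}

func main() {
	if err := waitRunning("newest-cni-669680", 20); err != nil {
		panic(err)
	}
	fmt.Println("container is running")
}
```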
	I1217 11:50:39.296821 3204903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519...
	I1217 11:50:39.301246 3204903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 11:50:39.326812 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.353133 3204903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:50:39.353152 3204903 kic_runner.go:114] Args: [docker exec --privileged newest-cni-669680 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:50:39.407725 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.427651 3204903 machine.go:94] provisionDockerMachine start ...
	I1217 11:50:39.427814 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:39.449037 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:39.449153 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:39.449161 3204903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:50:39.449689 3204903 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46730->127.0.0.1:36043: read: connection reset by peer
	I1217 11:50:42.588075 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
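The dial error at 11:50:39.449 is expected: sshd inside the freshly started container is not yet accepting connections, so the handshake is reset and the provisioner simply retries until `hostname` succeeds three seconds later. A sketch of that retry loop with golang.org/x/crypto/ssh, using the key path and forwarded port 36043 from the log; the retry budget is an assumption.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runWithRetry dials SSH and runs cmd, retrying while sshd is still booting
// inside the container (the "connection reset by peer" case in the log).
func runWithRetry(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
	var lastErr error
	for i := 0; i < 10; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			lastErr = err
			time.Sleep(time.Second)
			continue
		}
		sess, err := client.NewSession()
		if err != nil {
			client.Close()
			return "", err
		}
		out, err := sess.Output(cmd)
		sess.Close()
		client.Close()
		return string(out), err
	}
	return "", lastErr
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	}
	out, err := runWithRetry("127.0.0.1:36043", cfg, "hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // newest-cni-669680
}
```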
	
	I1217 11:50:42.588101 3204903 ubuntu.go:182] provisioning hostname "newest-cni-669680"
	I1217 11:50:42.588181 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.610896 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.611004 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.611019 3204903 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 11:50:42.758221 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 11:50:42.758323 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.776901 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.777030 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.777054 3204903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:50:42.909042 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:50:42.909069 3204903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:50:42.909089 3204903 ubuntu.go:190] setting up certificates
	I1217 11:50:42.909098 3204903 provision.go:84] configureAuth start
	I1217 11:50:42.909162 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:42.927260 3204903 provision.go:143] copyHostCerts
	I1217 11:50:42.927326 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:50:42.927335 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:50:42.927414 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:50:42.927515 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:50:42.927521 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:50:42.927546 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:50:42.927611 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:50:42.927615 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:50:42.927639 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:50:42.927694 3204903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
	I1217 11:50:43.131974 3204903 provision.go:177] copyRemoteCerts
	I1217 11:50:43.132056 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:50:43.132097 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.150139 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.244232 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:50:43.264226 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:50:43.288821 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:50:43.309853 3204903 provision.go:87] duration metric: took 400.734271ms to configureAuth
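copyRemoteCerts has to land files in root-owned /etc/docker over an SSH session that logs in as the unprivileged `docker` user. minikube's ssh_runner implements its own transfer protocol; a simpler equivalent, piping each file through `sudo tee`, is sketched below. The `pushFile` helper and the local file names are illustrative.

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile writes local file contents to a root-owned remote path by piping
// them through `sudo tee`, a simple stand-in for ssh_runner's scp.
func pushFile(client *ssh.Client, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee runs under sudo so /etc/docker is writable; stdout is discarded.
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remote))
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:36043", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	// The three transfers logged above; local names here are placeholders.
	for local, remote := range map[string]string{
		"ca.pem":         "/etc/docker/ca.pem",
		"server.pem":     "/etc/docker/server.pem",
		"server-key.pem": "/etc/docker/server-key.pem",
	} {
		if err := pushFile(client, local, remote); err != nil {
			panic(err)
		}
	}
}
```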
	I1217 11:50:43.309977 3204903 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:50:43.310242 3204903 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:43.310275 3204903 machine.go:97] duration metric: took 3.882542746s to provisionDockerMachine
	I1217 11:50:43.310297 3204903 client.go:176] duration metric: took 9.291651647s to LocalClient.Create
	I1217 11:50:43.310348 3204903 start.go:167] duration metric: took 9.291744428s to libmachine.API.Create "newest-cni-669680"
	I1217 11:50:43.310376 3204903 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 11:50:43.310413 3204903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:50:43.310526 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:50:43.310608 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.332854 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.428662 3204903 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:50:43.432294 3204903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:50:43.432325 3204903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:50:43.432337 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:50:43.432397 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:50:43.432547 3204903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:50:43.432651 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:50:43.440004 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:43.457337 3204903 start.go:296] duration metric: took 146.930324ms for postStartSetup
	I1217 11:50:43.457705 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.474473 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:43.474760 3204903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:50:43.474809 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.491508 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.585952 3204903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:50:43.591149 3204903 start.go:128] duration metric: took 9.576343313s to createHost
	I1217 11:50:43.591175 3204903 start.go:83] releasing machines lock for "newest-cni-669680", held for 9.576490895s
	I1217 11:50:43.591260 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.609275 3204903 ssh_runner.go:195] Run: cat /version.json
	I1217 11:50:43.609319 3204903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:50:43.609330 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.609377 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.631237 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.636805 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.724501 3204903 ssh_runner.go:195] Run: systemctl --version
	I1217 11:50:43.819531 3204903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:50:43.823938 3204903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:50:43.824018 3204903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:50:43.852205 3204903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 11:50:43.852281 3204903 start.go:496] detecting cgroup driver to use...
	I1217 11:50:43.852330 3204903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:50:43.852407 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:50:43.867801 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:50:43.881415 3204903 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:50:43.881505 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:50:43.898869 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:50:43.917331 3204903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:50:44.042660 3204903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:50:44.176397 3204903 docker.go:234] disabling docker service ...
	I1217 11:50:44.176490 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:50:44.197465 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:50:44.211041 3204903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:50:44.324043 3204903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:50:44.437310 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:50:44.451253 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:50:44.468227 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:50:44.477660 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:50:44.487940 3204903 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:50:44.488046 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:50:44.497581 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.506638 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:50:44.516061 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.524921 3204903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:50:44.533457 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:50:44.542606 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:50:44.551989 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:50:44.561578 3204903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:50:44.570051 3204903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:50:44.577822 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:44.688867 3204903 ssh_runner.go:195] Run: sudo systemctl restart containerd
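The run of sed commands above rewrites /etc/containerd/config.toml in place: cgroupfs instead of systemd cgroups (`SystemdCgroup = false`), `pause:3.10.1` as the sandbox image, the runc v2 shim, and `/etc/cni/net.d` as the CNI conf dir, followed by a daemon-reload and restart. A condensed sketch of the same edit-then-restart sequence; it has to run as root on the node itself, whereas the log drives each command through the ssh_runner.

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// A subset of the substitutions the log applies to config.toml.
	edits := []string{
		`s|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|`,
		`s|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|`,
		`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`, // cgroupfs driver
		`s|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g`,
		`s|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g`,
	}
	for _, e := range edits {
		run("sed", "-i", "-r", e, "/etc/containerd/config.toml")
	}
	// Pick up the new config, as the last two commands in the log do.
	run("systemctl", "daemon-reload")
	run("systemctl", "restart", "containerd")
}

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```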
	I1217 11:50:44.840667 3204903 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:50:44.840788 3204903 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:50:44.845363 3204903 start.go:564] Will wait 60s for crictl version
	I1217 11:50:44.845485 3204903 ssh_runner.go:195] Run: which crictl
	I1217 11:50:44.849376 3204903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:50:44.883387 3204903 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 11:50:44.883511 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.905807 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.930438 3204903 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:50:44.933446 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:44.950378 3204903 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:50:44.954519 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
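The one-liner above is an idempotent hosts-file update: strip any existing `host.minikube.internal` record, append a fresh one, and copy the result back with sudo, so repeated starts never accumulate duplicates. The same filter-and-append in Go, parameterized on the file path so the example does not need root:

```go
package main

import (
	"os"
	"strings"
)

// setHostRecord rewrites path so exactly one line maps name to ip, mirroring
// the grep -v / echo / sudo cp pipeline in the log. Blank lines are dropped,
// which a real hosts editor would preserve; this is only a sketch.
func setHostRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // drop any previous record for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// /tmp/hosts rather than /etc/hosts, so the sketch runs unprivileged.
	if err := setHostRecord("/tmp/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
```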
	I1217 11:50:44.968645 3204903 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 11:50:44.971593 3204903 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:50:44.971744 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:44.971843 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.011583 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.011607 3204903 containerd.go:534] Images already preloaded, skipping extraction
	I1217 11:50:45.011729 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.074368 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.074393 3204903 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:50:45.074401 3204903 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:50:45.074511 3204903 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:50:45.074583 3204903 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:50:45.124774 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:45.124803 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:45.124823 3204903 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 11:50:45.124848 3204903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:50:45.125086 3204903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:50:45.125178 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:50:45.136963 3204903 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:50:45.137076 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:50:45.146865 3204903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 11:50:45.164693 3204903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:50:45.183943 3204903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1217 11:50:45.201780 3204903 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:50:45.209411 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:50:45.227993 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:45.376186 3204903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:50:45.396221 3204903 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 11:50:45.396257 3204903 certs.go:195] generating shared ca certs ...
	I1217 11:50:45.396275 3204903 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.396432 3204903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:50:45.396497 3204903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:50:45.396511 3204903 certs.go:257] generating profile certs ...
	I1217 11:50:45.396576 3204903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 11:50:45.396594 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt with IP's: []
	I1217 11:50:45.498992 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt ...
	I1217 11:50:45.499023 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt: {Name:mkfb66bec095c72b7c1a0e563529baf2180c300c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499228 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key ...
	I1217 11:50:45.499243 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key: {Name:mk7292acf4e53dd5012d44cc923a43c80ae9a7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499340 3204903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 11:50:45.499360 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 11:50:45.885492 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 ...
	I1217 11:50:45.885525 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161: {Name:mkc2aab84e543777fe00770e300fac9f47cd579f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885732 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 ...
	I1217 11:50:45.885749 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161: {Name:mk25ae271c13c745dd8ef046c320963d505be1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885837 3204903 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt
	I1217 11:50:45.885921 3204903 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key
	I1217 11:50:45.885986 3204903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 11:50:45.886007 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt with IP's: []
	I1217 11:50:46.187502 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt ...
	I1217 11:50:46.187541 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt: {Name:mk12f9e3a4ac82afa8ef3e938731ab0419f581a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:46.187741 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key ...
	I1217 11:50:46.187756 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key: {Name:mk258e31e31368b8ae182e758b28fd15f98dabb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
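The profile certs written here include an apiserver serving certificate whose IP SANs cover the service VIP 10.96.0.1, loopback, and the node IP 192.168.76.2 (see the `Generating cert ... with IP's` line above). A compact crypto/x509 sketch of issuing such a cert from a CA; the self-signed throwaway CA and the `issueServingCert` helper are for illustration only, not minikube's crypto.go.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServingCert signs a serving certificate for the given IP SANs with the
// provided CA, mirroring the "generating signed profile cert" step above.
func issueServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses:  ips,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Self-signed throwaway CA so the example runs standalone.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	// The SAN list from the log: service VIP, loopbacks, and the node IP.
	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2")}
	certPEM, keyPEM, err := issueServingCert(caCert, caKey, ips)
	if err != nil {
		panic(err)
	}
	os.WriteFile("apiserver.crt", certPEM, 0644)
	os.WriteFile("apiserver.key", keyPEM, 0600)
}
```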
	I1217 11:50:46.187958 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:50:46.188008 3204903 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:50:46.188031 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:50:46.188065 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:50:46.188095 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:50:46.188125 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:50:46.188174 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:46.188855 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:50:46.209179 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:50:46.229625 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:50:46.248348 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:50:46.280053 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:50:46.299540 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 11:50:46.331420 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:50:46.354812 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:50:46.379741 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:50:46.398349 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:50:46.416502 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:50:46.434656 3204903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:50:46.448045 3204903 ssh_runner.go:195] Run: openssl version
	I1217 11:50:46.454404 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.462383 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:50:46.470220 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474117 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474205 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.515776 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:50:46.523521 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:50:46.531167 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.538808 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:50:46.546526 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550351 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550420 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.591537 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:50:46.599582 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
	I1217 11:50:46.607230 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.615038 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:50:46.623144 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627219 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627293 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.668548 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:50:46.676480 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
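The alternating `openssl x509 -hash` and `ln -fs` calls above implement OpenSSL's hashed CA directory layout: each certificate is reachable under a `<subject-hash>.0` filename (here b5213941.0, 51391683.0, 3ec20f2e.0), which is how TLS clients on the node look up the minikube CA. A sketch of installing one cert that way; the destination directory is a parameter so it can run without root.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert symlinks certPath into dir under the <subject-hash>.0 name that
// OpenSSL uses for CA lookups, like the ln -fs commands in the log.
func installCert(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(dir, hash+".0")
	os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir()); err != nil {
		panic(err)
	}
	fmt.Println("installed")
}
```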
	I1217 11:50:46.684254 3204903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:50:46.687989 3204903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:50:46.688053 3204903 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:46.688186 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:50:46.688251 3204903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:50:46.715504 3204903 cri.go:89] found id: ""
	I1217 11:50:46.715577 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:50:46.723636 3204903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:50:46.731913 3204903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:50:46.732013 3204903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:50:46.740391 3204903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:50:46.740486 3204903 kubeadm.go:158] found existing configuration files:
	
	I1217 11:50:46.740554 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:50:46.748658 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:50:46.748734 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:50:46.756251 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:50:46.764744 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:50:46.764812 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:50:46.772495 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.780304 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:50:46.780374 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.787858 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:50:46.795827 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:50:46.795920 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
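The grep/rm sequence above is stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so `kubeadm init` regenerates it. On this first start every grep exits with status 2 (file missing), so the `rm -f` calls are no-ops. The same loop, sketched:

```go
package main

import (
	"os"
	"path/filepath"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, c := range confs {
		path := filepath.Join("/etc/kubernetes", c)
		data, err := os.ReadFile(path)
		// Missing file or wrong endpoint: remove so kubeadm regenerates it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(path) // error ignored, mirroring `sudo rm -f`
		}
	}
}
```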
	I1217 11:50:46.803940 3204903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:50:46.842300 3204903 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:50:46.842364 3204903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:50:46.914982 3204903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:50:46.915066 3204903 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:50:46.915120 3204903 kubeadm.go:319] OS: Linux
	I1217 11:50:46.915224 3204903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:50:46.915306 3204903 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:50:46.915380 3204903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:50:46.915458 3204903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:50:46.915534 3204903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:50:46.915612 3204903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:50:46.915688 3204903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:50:46.915760 3204903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:50:46.915833 3204903 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:50:46.991927 3204903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:50:46.992117 3204903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:50:46.992264 3204903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:50:47.011559 3204903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:50:47.018032 3204903 out.go:252]   - Generating certificates and keys ...
	I1217 11:50:47.018195 3204903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:50:47.018301 3204903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:50:47.129470 3204903 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:50:47.445618 3204903 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:50:47.915158 3204903 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:50:48.499656 3204903 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:50:48.596834 3204903 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:50:48.597124 3204903 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.753661 3204903 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:50:48.754010 3204903 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.982189 3204903 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:50:49.176711 3204903 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:50:49.329925 3204903 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:50:49.330545 3204903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:50:49.669219 3204903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:50:49.769896 3204903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:50:50.134620 3204903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:50:50.518232 3204903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:50:51.159536 3204903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:50:51.160438 3204903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:50:51.163380 3204903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:50:51.167113 3204903 out.go:252]   - Booting up control plane ...
	I1217 11:50:51.167275 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:50:51.167359 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:50:51.168888 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:50:51.187617 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:50:51.187958 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:50:51.195573 3204903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:50:51.195900 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:50:51.195946 3204903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:50:51.332866 3204903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:50:51.332987 3204903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:54:51.328759 3204903 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001152207s
	I1217 11:54:51.328801 3204903 kubeadm.go:319] 
	I1217 11:54:51.328906 3204903 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:54:51.328965 3204903 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:54:51.329441 3204903 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:54:51.329455 3204903 kubeadm.go:319] 
	I1217 11:54:51.329644 3204903 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:54:51.329715 3204903 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:54:51.329900 3204903 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:54:51.329906 3204903 kubeadm.go:319] 
	I1217 11:54:51.334038 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:54:51.334499 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:54:51.334619 3204903 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:54:51.334877 3204903 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 11:54:51.334886 3204903 kubeadm.go:319] 
	I1217 11:54:51.334961 3204903 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
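	kubeadm's wait-control-plane phase gives the kubelet up to 4m0s to answer its local healthz endpoint before aborting, and that probe can be replayed by hand. A sketch, assuming the profile name newest-cni-669680 seen in the certificate lines above and that minikube ssh passes the trailing command through to the node:
	
	minikube ssh -p newest-cni-669680 -- curl -sSL http://127.0.0.1:10248/healthz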
	W1217 11:54:51.335076 3204903 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001152207s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
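	The troubleshooting commands suggested above must run inside the node, which under the docker driver is a container named after the profile. A sketch of both checks under that assumption:
	
	docker exec newest-cni-669680 systemctl status kubelet --no-pager
	docker exec newest-cni-669680 journalctl -xeu kubelet --no-pager | tail -n 50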
	
	I1217 11:54:51.335168 3204903 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 11:54:51.745608 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:54:51.758936 3204903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:54:51.759045 3204903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:54:51.767791 3204903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:54:51.767821 3204903 kubeadm.go:158] found existing configuration files:
	
	I1217 11:54:51.767929 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:54:51.776485 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:54:51.776552 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:54:51.784313 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:54:51.792583 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:54:51.792692 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:54:51.800824 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:54:51.809125 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:54:51.809247 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:54:51.818264 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:54:51.826373 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:54:51.826439 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:54:51.834569 3204903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:54:51.873437 3204903 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:54:51.873499 3204903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:54:51.944757 3204903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:54:51.944829 3204903 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:54:51.944868 3204903 kubeadm.go:319] OS: Linux
	I1217 11:54:51.944915 3204903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:54:51.944965 3204903 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:54:51.945013 3204903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:54:51.945062 3204903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:54:51.945112 3204903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:54:51.945161 3204903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:54:51.945207 3204903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:54:51.945256 3204903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:54:51.945304 3204903 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:54:52.011393 3204903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:54:52.011506 3204903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:54:52.011597 3204903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:54:52.018267 3204903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:54:52.021826 3204903 out.go:252]   - Generating certificates and keys ...
	I1217 11:54:52.021926 3204903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:54:52.022003 3204903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:54:52.022098 3204903 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 11:54:52.022197 3204903 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 11:54:52.022313 3204903 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 11:54:52.022392 3204903 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 11:54:52.023051 3204903 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 11:54:52.023420 3204903 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 11:54:52.023720 3204903 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 11:54:52.024098 3204903 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 11:54:52.024395 3204903 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 11:54:52.024488 3204903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:54:52.154533 3204903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:54:52.254828 3204903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:54:52.520215 3204903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:54:52.620865 3204903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:54:52.853590 3204903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:54:52.854283 3204903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:54:52.857519 3204903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:54:52.860706 3204903 out.go:252]   - Booting up control plane ...
	I1217 11:54:52.860973 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:54:52.861072 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:54:52.862252 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:54:52.883837 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:54:52.883954 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:54:52.891508 3204903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:54:52.891860 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:54:52.891912 3204903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:54:53.027569 3204903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:54:53.027697 3204903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:58:53.028972 3204903 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001753064s
	I1217 11:58:53.029259 3204903 kubeadm.go:319] 
	I1217 11:58:53.029324 3204903 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:58:53.029359 3204903 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:58:53.029464 3204903 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:58:53.029468 3204903 kubeadm.go:319] 
	I1217 11:58:53.029572 3204903 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:58:53.029604 3204903 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:58:53.029645 3204903 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:58:53.029650 3204903 kubeadm.go:319] 
	I1217 11:58:53.035722 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:58:53.036145 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:58:53.036254 3204903 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:58:53.036508 3204903 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 11:58:53.036516 3204903 kubeadm.go:319] 
	I1217 11:58:53.036585 3204903 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
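	The cgroups v1 warning above names the relevant switch: kubelet v1.35 and newer will not run on a cgroup v1 host unless FailCgroupV1 is set to false. A hypothetical sketch of how that kubelet option could be fed to kubeadm, assuming the camelCase YAML spelling of the field; note that minikube's generated config may already contain a KubeletConfiguration document the field belongs in instead:
	
	# hypothetical: append a KubeletConfiguration document to the kubeadm config
	cat <<'EOF' | sudo tee -a /var/tmp/minikube/kubeadm.yaml
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF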
	I1217 11:58:53.036636 3204903 kubeadm.go:403] duration metric: took 8m6.348588119s to StartCluster
	I1217 11:58:53.036680 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:58:53.036746 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:58:53.092234 3204903 cri.go:89] found id: ""
	I1217 11:58:53.092255 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.092264 3204903 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:58:53.092270 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:58:53.092329 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:58:53.120381 3204903 cri.go:89] found id: ""
	I1217 11:58:53.120404 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.120412 3204903 logs.go:284] No container was found matching "etcd"
	I1217 11:58:53.120440 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:58:53.120504 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:58:53.150913 3204903 cri.go:89] found id: ""
	I1217 11:58:53.150935 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.150943 3204903 logs.go:284] No container was found matching "coredns"
	I1217 11:58:53.150949 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:58:53.151010 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:58:53.177002 3204903 cri.go:89] found id: ""
	I1217 11:58:53.177028 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.177037 3204903 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:58:53.177044 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:58:53.177105 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:58:53.202075 3204903 cri.go:89] found id: ""
	I1217 11:58:53.202101 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.202109 3204903 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:58:53.202116 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:58:53.202175 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:58:53.230674 3204903 cri.go:89] found id: ""
	I1217 11:58:53.230701 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.230709 3204903 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:58:53.230716 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:58:53.230773 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:58:53.256007 3204903 cri.go:89] found id: ""
	I1217 11:58:53.256034 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.256042 3204903 logs.go:284] No container was found matching "kindnet"
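	After the failed init, minikube probes each expected control-plane container by name through crictl; the empty id lists above mean nothing was ever created. The same sweep can be reproduced by hand, again assuming the docker-driver container name:
	
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  docker exec newest-cni-669680 sudo crictl ps -a --quiet --name="$c"
	done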
	I1217 11:58:53.256053 3204903 logs.go:123] Gathering logs for kubelet ...
	I1217 11:58:53.256065 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:58:53.314487 3204903 logs.go:123] Gathering logs for dmesg ...
	I1217 11:58:53.314524 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:58:53.331203 3204903 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:58:53.331240 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:58:53.399250 3204903 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:58:53.390312    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.390861    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.392589    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.393116    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.394713    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 11:58:53.390312    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.390861    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.392589    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.393116    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.394713    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:58:53.399274 3204903 logs.go:123] Gathering logs for containerd ...
	I1217 11:58:53.399288 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:58:53.439803 3204903 logs.go:123] Gathering logs for container status ...
	I1217 11:58:53.439840 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 11:58:53.468929 3204903 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001753064s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 11:58:53.469033 3204903 out.go:285] * 
	W1217 11:58:53.469130 3204903 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001753064s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:58:53.469177 3204903 out.go:285] * 
	W1217 11:58:53.471457 3204903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
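	The log bundle the box asks for can be captured per profile; a sketch using the profile name from this run:
	
	minikube logs -p newest-cni-669680 --file=logs.txt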
	I1217 11:58:53.476865 3204903 out.go:203] 
	W1217 11:58:53.479927 3204903 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001753064s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001753064s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:58:53.479964 3204903 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:58:53.479988 3204903 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 11:58:53.483089 3204903 out.go:203] 

** /stderr **
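
The two Suggestion lines at the tail of the stderr block point at a kubelet/container-runtime cgroup-driver mismatch as the likely cause of the unhealthy kubelet. A minimal retry sketch, assuming the same profile and flags as the failing invocation recorded below; only the last --extra-config flag is new, and it is the one the log itself suggests:

    # Remove the half-initialized profile, then retry with the kubelet
    # pinned to the systemd cgroup driver, per the suggestion in the log.
    minikube delete -p newest-cni-669680
    minikube start -p newest-cni-669680 --memory=3072 --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd
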
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
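
The SystemVerification warning in the stderr above also names a second, independent requirement on this cgroups v1 host: kubelet v1.35 or newer refuses to run unless 'FailCgroupV1' is explicitly set to 'false'. A sketch of what that opt-in looks like as a kubelet configuration file, with the lowercase field spelling assumed from the upstream KubeletConfiguration API; whether minikube forwards such a file on this code path is not shown in this report:

    # Write the cgroups v1 opt-in the warning describes; the file name and
    # the failCgroupV1 spelling are assumptions, not taken from this log.
    cat > kubelet-config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF
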
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-669680
helpers_test.go:244: (dbg) docker inspect newest-cni-669680:

-- stdout --
	[
	    {
	        "Id": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	        "Created": "2025-12-17T11:50:38.904543162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3205329,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:50:38.98558565Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hosts",
	        "LogPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc-json.log",
	        "Name": "/newest-cni-669680",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-669680:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-669680",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	                "LowerDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-669680",
	                "Source": "/var/lib/docker/volumes/newest-cni-669680/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-669680",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-669680",
	                "name.minikube.sigs.k8s.io": "newest-cni-669680",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b6274c51925abac74af91e3b11cb0a4d5cf37e009a5faa7c8800fc2099930727",
	            "SandboxKey": "/var/run/docker/netns/b6274c51925a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-669680": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:80:8e:4b:68:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e84740d61c89f51b13c32d88b9c5aafc9e8e1ba5e275e3db72c9a38077e44a94",
	                    "EndpointID": "de0b9853e35e2b17e7ac367a79084e643b7446b0efa3f0d2161f29a374748652",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-669680",
	                        "23474ef32ddb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
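
The post-mortem dumps the full docker inspect JSON above; when only a field or two is wanted, the same data can be pulled with a Go template, the same form the cli_runner invocations later in this log use. A small sketch against the same container name:

    # Container state only, then the published host port for 22/tcp.
    docker container inspect newest-cni-669680 --format '{{.State.Status}}'
    docker container inspect newest-cni-669680 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
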
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680: exit status 6 (328.949528ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 11:58:53.882410 3217248 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-669680" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
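
The status error shows the profile's endpoint never made it into the kubeconfig, and the stdout block above names the remedy itself: minikube update-context. A minimal sketch; the trailing context check is an added sanity step, not part of the report's own suggestion:

    # Rewrite the kubeconfig entry for this profile, then confirm the context.
    minikube update-context -p newest-cni-669680
    kubectl config current-context
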
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-669680 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-628462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:46 UTC │
	│ stop    │ -p embed-certs-628462 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:47 UTC │
	│ addons  │ enable dashboard -p embed-certs-628462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ image   │ embed-certs-628462 image list --format=json                                                                                                                                                                                                              │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ pause   │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ unpause │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ stop    │ -p no-preload-118262 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ addons  │ enable dashboard -p no-preload-118262 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:55:54
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:55:54.097672 3212985 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:55:54.097800 3212985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:54.097813 3212985 out.go:374] Setting ErrFile to fd 2...
	I1217 11:55:54.097821 3212985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:54.098207 3212985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:55:54.098683 3212985 out.go:368] Setting JSON to false
	I1217 11:55:54.100030 3212985 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63504,"bootTime":1765909050,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:55:54.100100 3212985 start.go:143] virtualization:  
	I1217 11:55:54.103066 3212985 out.go:179] * [no-preload-118262] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:55:54.106891 3212985 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:55:54.107071 3212985 notify.go:221] Checking for updates...
	I1217 11:55:54.112925 3212985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:55:54.115810 3212985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:55:54.118639 3212985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:55:54.121576 3212985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:55:54.124535 3212985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:55:54.127987 3212985 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:55:54.128592 3212985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:55:54.156670 3212985 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:55:54.156789 3212985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:54.209915 3212985 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:55:54.200713336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:55:54.210021 3212985 docker.go:319] overlay module found
	I1217 11:55:54.213155 3212985 out.go:179] * Using the docker driver based on existing profile
	I1217 11:55:54.215964 3212985 start.go:309] selected driver: docker
	I1217 11:55:54.215988 3212985 start.go:927] validating driver "docker" against &{Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:54.216113 3212985 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:55:54.217029 3212985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:54.283254 3212985 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:55:54.273141828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:55:54.283582 3212985 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:54.283614 3212985 cni.go:84] Creating CNI manager for ""
	I1217 11:55:54.283667 3212985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:55:54.283720 3212985 start.go:353] cluster config:
	{Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:54.286974 3212985 out.go:179] * Starting "no-preload-118262" primary control-plane node in "no-preload-118262" cluster
	I1217 11:55:54.289858 3212985 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:55:54.292806 3212985 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:55:54.295787 3212985 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:55:54.295891 3212985 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:55:54.295951 3212985 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/config.json ...
	I1217 11:55:54.296271 3212985 cache.go:107] acquiring lock: {Name:mk815fc0c67b76ed2ee0b075f6917d43e67b13d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296357 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 11:55:54.296371 3212985 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.017µs
	I1217 11:55:54.296388 3212985 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 11:55:54.296401 3212985 cache.go:107] acquiring lock: {Name:mk11644c35fa0d35fcf9d5a865af6c28a7df16d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296484 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 11:55:54.296496 3212985 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 96.622µs
	I1217 11:55:54.296503 3212985 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296515 3212985 cache.go:107] acquiring lock: {Name:mk02712d952db0244ab56f62810e58a983831503 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296551 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 11:55:54.296561 3212985 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 47.679µs
	I1217 11:55:54.296569 3212985 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296591 3212985 cache.go:107] acquiring lock: {Name:mk436387f099b91bd6762b69e3678ebc0f9561cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296627 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 11:55:54.296637 3212985 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 52.323µs
	I1217 11:55:54.296644 3212985 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296653 3212985 cache.go:107] acquiring lock: {Name:mkf4cd732ad0857bbeaf7d91402ed78da15112e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296678 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 11:55:54.296683 3212985 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 32.098µs
	I1217 11:55:54.296690 3212985 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296699 3212985 cache.go:107] acquiring lock: {Name:mka934c06f25efbc149ef4769eaae5adad4ea53a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296728 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1217 11:55:54.296733 3212985 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.873µs
	I1217 11:55:54.296739 3212985 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 11:55:54.296748 3212985 cache.go:107] acquiring lock: {Name:mkb53641077bc34de612e9b78566264ac82d9b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296778 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 11:55:54.296787 3212985 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 39.884µs
	I1217 11:55:54.296793 3212985 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 11:55:54.296801 3212985 cache.go:107] acquiring lock: {Name:mkca0a51840ba852f371cde8bcc41ec807c30a00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296838 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 11:55:54.296847 3212985 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 46.588µs
	I1217 11:55:54.296856 3212985 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 11:55:54.296862 3212985 cache.go:87] Successfully saved all images to host disk.
	I1217 11:55:54.316030 3212985 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:55:54.316051 3212985 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:55:54.316070 3212985 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:55:54.316101 3212985 start.go:360] acquireMachinesLock for no-preload-118262: {Name:mka8b15d744256405cc79d3bb936a81c229c3b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.316161 3212985 start.go:364] duration metric: took 39.77µs to acquireMachinesLock for "no-preload-118262"
	I1217 11:55:54.316185 3212985 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:55:54.316190 3212985 fix.go:54] fixHost starting: 
	I1217 11:55:54.316490 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:55:54.332759 3212985 fix.go:112] recreateIfNeeded on no-preload-118262: state=Stopped err=<nil>
	W1217 11:55:54.332793 3212985 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 11:55:54.338085 3212985 out.go:252] * Restarting existing docker container for "no-preload-118262" ...
	I1217 11:55:54.338180 3212985 cli_runner.go:164] Run: docker start no-preload-118262
	I1217 11:55:54.606459 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:55:54.631006 3212985 kic.go:432] container "no-preload-118262" state is running.
	I1217 11:55:54.631393 3212985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:55:54.652451 3212985 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/config.json ...
	I1217 11:55:54.652674 3212985 machine.go:94] provisionDockerMachine start ...
	I1217 11:55:54.652732 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:54.677707 3212985 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:54.677814 3212985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36048 <nil> <nil>}
	I1217 11:55:54.677822 3212985 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:55:54.678839 3212985 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 11:55:57.812197 3212985 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-118262
	
	I1217 11:55:57.812222 3212985 ubuntu.go:182] provisioning hostname "no-preload-118262"
	I1217 11:55:57.812295 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:57.830852 3212985 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:57.830954 3212985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36048 <nil> <nil>}
	I1217 11:55:57.830964 3212985 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-118262 && echo "no-preload-118262" | sudo tee /etc/hostname
	I1217 11:55:57.977757 3212985 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-118262
	
	I1217 11:55:57.977834 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:57.995318 3212985 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:57.995438 3212985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36048 <nil> <nil>}
	I1217 11:55:57.995454 3212985 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118262' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118262/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118262' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:55:58.129000 3212985 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:55:58.129026 3212985 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:55:58.129048 3212985 ubuntu.go:190] setting up certificates
	I1217 11:55:58.129058 3212985 provision.go:84] configureAuth start
	I1217 11:55:58.129137 3212985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:55:58.146660 3212985 provision.go:143] copyHostCerts
	I1217 11:55:58.146727 3212985 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:55:58.146737 3212985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:55:58.146821 3212985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:55:58.146983 3212985 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:55:58.146989 3212985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:55:58.147018 3212985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:55:58.147082 3212985 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:55:58.147087 3212985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:55:58.147112 3212985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:55:58.147174 3212985 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.no-preload-118262 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-118262]
	I1217 11:55:58.677412 3212985 provision.go:177] copyRemoteCerts
	I1217 11:55:58.677487 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:55:58.677537 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:58.696153 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:58.796388 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:55:58.813975 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:55:58.831950 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
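At this point the server certificate and the CA it chains to have both been copied to /etc/docker. A minimal verification sketch, assuming it is run on the node over the same SSH session (paths taken from the scp lines above):

    # The server cert should verify against the CA copied alongside it.
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    # The SANs baked in by configureAuth (127.0.0.1, 192.168.85.2, localhost, ...)
    # are visible in the certificate body.
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'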
	I1217 11:55:58.849412 3212985 provision.go:87] duration metric: took 720.33021ms to configureAuth
	I1217 11:55:58.849488 3212985 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:55:58.849743 3212985 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:55:58.849760 3212985 machine.go:97] duration metric: took 4.197077033s to provisionDockerMachine
	I1217 11:55:58.849769 3212985 start.go:293] postStartSetup for "no-preload-118262" (driver="docker")
	I1217 11:55:58.849784 3212985 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:55:58.849838 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:55:58.849879 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:58.867585 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:58.964748 3212985 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:55:58.968333 3212985 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:55:58.968360 3212985 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:55:58.968372 3212985 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:55:58.968454 3212985 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:55:58.968542 3212985 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:55:58.968640 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:55:58.976685 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:55:58.995461 3212985 start.go:296] duration metric: took 145.672692ms for postStartSetup
	I1217 11:55:58.995541 3212985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:55:58.995586 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:59.019538 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:59.117541 3212985 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:55:59.122267 3212985 fix.go:56] duration metric: took 4.80606985s for fixHost
	I1217 11:55:59.122307 3212985 start.go:83] releasing machines lock for "no-preload-118262", held for 4.806123002s
	I1217 11:55:59.122382 3212985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:55:59.141556 3212985 ssh_runner.go:195] Run: cat /version.json
	I1217 11:55:59.141603 3212985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:55:59.141611 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:59.141660 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:59.164620 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:59.164771 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:59.351128 3212985 ssh_runner.go:195] Run: systemctl --version
	I1217 11:55:59.358084 3212985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:55:59.362663 3212985 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:55:59.362766 3212985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:55:59.371125 3212985 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 11:55:59.371153 3212985 start.go:496] detecting cgroup driver to use...
	I1217 11:55:59.371206 3212985 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:55:59.371277 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:55:59.389287 3212985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:55:59.403831 3212985 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:55:59.403893 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:55:59.419497 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:55:59.432548 3212985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:55:59.542751 3212985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:55:59.663663 3212985 docker.go:234] disabling docker service ...
	I1217 11:55:59.663734 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:55:59.680687 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:55:59.694833 3212985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:55:59.829203 3212985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:55:59.950677 3212985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:55:59.964080 3212985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:55:59.978475 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:55:59.987229 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:55:59.996111 3212985 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:55:59.996210 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:56:00.040190 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:56:00.080003 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:56:00.111408 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:56:00.135837 3212985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:56:00.154709 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:56:00.192639 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:56:00.215745 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
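The sed pipeline above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the cgroupfs driver detected earlier, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to runc.v2, and enable_unprivileged_ports is re-inserted under the CRI plugin. A minimal spot-check of the expected end state (key names exactly as used in the commands above):

    sudo grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' \
      /etc/containerd/config.toml
    # Expected, roughly:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
    #   enable_unprivileged_ports = true
    #   conf_dir = "/etc/cni/net.d"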
	I1217 11:56:00.252832 3212985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:56:00.276526 3212985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:56:00.295796 3212985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:00.437457 3212985 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 11:56:00.567606 3212985 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:56:00.567753 3212985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:56:00.572865 3212985 start.go:564] Will wait 60s for crictl version
	I1217 11:56:00.572972 3212985 ssh_runner.go:195] Run: which crictl
	I1217 11:56:00.577625 3212985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:56:00.604322 3212985 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 11:56:00.604502 3212985 ssh_runner.go:195] Run: containerd --version
	I1217 11:56:00.631560 3212985 ssh_runner.go:195] Run: containerd --version
	I1217 11:56:00.656469 3212985 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:56:00.659351 3212985 cli_runner.go:164] Run: docker network inspect no-preload-118262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:56:00.676349 3212985 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 11:56:00.680496 3212985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
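Note the update pattern here: rather than sed -i, the whole file is regenerated (filter out any existing host.minikube.internal line, append the fresh mapping) and copied back with sudo cp. This is likely because /etc/hosts inside a Docker container is a bind mount, where sed -i's rename-based replace fails; cp rewrites the file in place. A sketch of the same pattern (gateway IP 192.168.85.1 taken from the log):

    # Rebuild /etc/hosts with exactly one host.minikube.internal entry.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.85.1\thost.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts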
	I1217 11:56:00.690917 3212985 kubeadm.go:884] updating cluster {Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:56:00.691048 3212985 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:56:00.691104 3212985 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:56:00.720555 3212985 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:56:00.720582 3212985 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:56:00.720590 3212985 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:56:00.720694 3212985 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118262 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:56:00.720772 3212985 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:56:00.749210 3212985 cni.go:84] Creating CNI manager for ""
	I1217 11:56:00.749238 3212985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:56:00.749254 3212985 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:56:00.749310 3212985 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118262 NodeName:no-preload-118262 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:56:00.749505 3212985 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-118262"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:56:00.749576 3212985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:56:00.757442 3212985 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:56:00.757524 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:56:00.765473 3212985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 11:56:00.778740 3212985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:56:00.792394 3212985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
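The combined InitConfiguration / ClusterConfiguration / KubeletConfiguration / KubeProxyConfiguration document above is what just got copied to /var/tmp/minikube/kubeadm.yaml.new (2235 bytes). A hedged sanity-check sketch: recent kubeadm releases ship a config validate subcommand, so a file like this can be checked by hand before it is diffed against the live kubeadm.yaml further down (assuming kubeadm sits alongside the kubelet and kubectl binaries in /var/lib/minikube/binaries/v1.35.0-rc.1; only kubelet and kubectl appear in this log):

    # Validate the generated config with the matching kubeadm binary
    # (subcommand availability assumed; present in recent kubeadm releases).
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new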
	I1217 11:56:00.806454 3212985 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:56:00.810279 3212985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:56:00.820510 3212985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:00.934464 3212985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:56:00.950819 3212985 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262 for IP: 192.168.85.2
	I1217 11:56:00.950891 3212985 certs.go:195] generating shared ca certs ...
	I1217 11:56:00.950922 3212985 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:00.951114 3212985 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:56:00.951194 3212985 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:56:00.951232 3212985 certs.go:257] generating profile certs ...
	I1217 11:56:00.951382 3212985 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/client.key
	I1217 11:56:00.951530 3212985 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key.082f94c0
	I1217 11:56:00.951606 3212985 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key
	I1217 11:56:00.951762 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:56:00.951827 3212985 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:56:00.951867 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:56:00.951923 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:56:00.952000 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:56:00.952049 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:56:00.952133 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:56:00.952760 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:56:00.978711 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:56:00.997524 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:56:01.017452 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:56:01.037516 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:56:01.055698 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:56:01.078786 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:56:01.098475 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:56:01.116977 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:56:01.136015 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:56:01.156004 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:56:01.175302 3212985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:56:01.190197 3212985 ssh_runner.go:195] Run: openssl version
	I1217 11:56:01.197107 3212985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.205490 3212985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:56:01.214061 3212985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.218349 3212985 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.218423 3212985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.260525 3212985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:56:01.268554 3212985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.276397 3212985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:56:01.284768 3212985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.289382 3212985 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.289516 3212985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.331248 3212985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:56:01.338774 3212985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.346651 3212985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:56:01.354564 3212985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.358698 3212985 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.358775 3212985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.400939 3212985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
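Each hash-then-test pair above follows OpenSSL's hashed-directory convention: a certificate is trusted if /etc/ssl/certs contains a symlink named <subject-hash>.0 pointing at it, and <subject-hash> is exactly what openssl x509 -hash prints. A minimal sketch tying the two steps together (file name and hash b5213941 taken from this log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    # ...which is the symlink name probed by the corresponding `sudo test -L`:
    ls -l /etc/ssl/certs/b5213941.0
    # lrwxrwxrwx ... /etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem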
	I1217 11:56:01.408692 3212985 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:56:01.412548 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 11:56:01.453899 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 11:56:01.495014 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 11:56:01.536150 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 11:56:01.577723 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 11:56:01.619271 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
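Each -checkend 86400 run asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will not have expired within that window, and a non-zero status is what would force regeneration. A minimal sketch of the semantics (path from the last check above):

    # Exit 0 if still valid 24h from now, non-zero otherwise.
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/front-proxy-client.crt \
      && echo "valid for >= 24h" || echo "expires within 24h"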
	I1217 11:56:01.660657 3212985 kubeadm.go:401] StartCluster: {Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:56:01.660750 3212985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:56:01.660833 3212985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:56:01.687958 3212985 cri.go:89] found id: ""
	I1217 11:56:01.688081 3212985 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:56:01.696230 3212985 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 11:56:01.696252 3212985 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 11:56:01.696304 3212985 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 11:56:01.704102 3212985 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:56:01.704665 3212985 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:56:01.705100 3212985 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-118262" cluster setting kubeconfig missing "no-preload-118262" context setting]
	I1217 11:56:01.705938 3212985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:01.707388 3212985 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 11:56:01.717641 3212985 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 11:56:01.717678 3212985 kubeadm.go:602] duration metric: took 21.41966ms to restartPrimaryControlPlane
	I1217 11:56:01.717689 3212985 kubeadm.go:403] duration metric: took 57.040291ms to StartCluster
	I1217 11:56:01.717705 3212985 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:01.717769 3212985 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:56:01.718373 3212985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:01.718582 3212985 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:56:01.718926 3212985 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:56:01.718998 3212985 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:56:01.719125 3212985 addons.go:70] Setting storage-provisioner=true in profile "no-preload-118262"
	I1217 11:56:01.719146 3212985 addons.go:239] Setting addon storage-provisioner=true in "no-preload-118262"
	I1217 11:56:01.719168 3212985 host.go:66] Checking if "no-preload-118262" exists ...
	I1217 11:56:01.719170 3212985 addons.go:70] Setting dashboard=true in profile "no-preload-118262"
	I1217 11:56:01.719228 3212985 addons.go:239] Setting addon dashboard=true in "no-preload-118262"
	W1217 11:56:01.719262 3212985 addons.go:248] addon dashboard should already be in state true
	I1217 11:56:01.719308 3212985 host.go:66] Checking if "no-preload-118262" exists ...
	I1217 11:56:01.719638 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.719916 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.720320 3212985 addons.go:70] Setting default-storageclass=true in profile "no-preload-118262"
	I1217 11:56:01.720337 3212985 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-118262"
	I1217 11:56:01.720702 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.724486 3212985 out.go:179] * Verifying Kubernetes components...
	I1217 11:56:01.727633 3212985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:01.766705 3212985 addons.go:239] Setting addon default-storageclass=true in "no-preload-118262"
	I1217 11:56:01.766751 3212985 host.go:66] Checking if "no-preload-118262" exists ...
	I1217 11:56:01.767177 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.793928 3212985 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:56:01.799560 3212985 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:01.799586 3212985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:56:01.799655 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:56:01.806809 3212985 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:56:01.806838 3212985 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:56:01.806902 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:56:01.809039 3212985 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 11:56:01.812535 3212985 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 11:56:01.817510 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 11:56:01.817535 3212985 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 11:56:01.817604 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:56:01.867768 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:56:01.868081 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:56:01.868642 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:56:01.951610 3212985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:56:02.024186 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 11:56:02.024265 3212985 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 11:56:02.043227 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 11:56:02.043295 3212985 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 11:56:02.048999 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:56:02.054810 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:02.086887 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 11:56:02.086960 3212985 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 11:56:02.105255 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 11:56:02.105288 3212985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 11:56:02.121678 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 11:56:02.121719 3212985 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 11:56:02.137737 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 11:56:02.137779 3212985 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 11:56:02.153356 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 11:56:02.153397 3212985 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 11:56:02.168513 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 11:56:02.168557 3212985 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 11:56:02.185798 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:56:02.185838 3212985 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 11:56:02.201465 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:02.758705 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.758748 3212985 retry.go:31] will retry after 229.540303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:02.758805 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.758819 3212985 retry.go:31] will retry after 199.856736ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:02.759004 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.759019 3212985 retry.go:31] will retry after 172.784882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.759090 3212985 node_ready.go:35] waiting up to 6m0s for node "no-preload-118262" to be "Ready" ...
	I1217 11:56:02.932840 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:56:02.959390 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:02.988844 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:03.015687 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.015717 3212985 retry.go:31] will retry after 427.179701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:03.053926 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.053954 3212985 retry.go:31] will retry after 351.36ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:03.071903 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.071938 3212985 retry.go:31] will retry after 460.512525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
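	[editor's note] Every failure in this stretch of the log has the same root cause: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on localhost:8443 that GET fails before any manifest is even submitted, so the suggested --validate=false would only mask the symptom. A quick way to confirm the apiserver itself is down is to probe its health endpoint directly; the sketch below assumes the standard kube-apiserver /readyz endpoint and skips TLS verification for brevity.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// Probe whether the apiserver is accepting connections at all.
	// A "connection refused" here matches the validation failures in
	// this log; any HTTP status (even 401) means the socket is up and
	// the problem lies elsewhere.
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert in this setup;
				// skipping verification is acceptable for a liveness probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8443/readyz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}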
	I1217 11:56:03.405971 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:03.443451 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:03.475863 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.475958 3212985 retry.go:31] will retry after 760.184682ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.533075 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:03.533848 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.533930 3212985 retry.go:31] will retry after 500.153362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:03.629508 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.629561 3212985 retry.go:31] will retry after 828.549967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.034401 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:04.098672 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.098706 3212985 retry.go:31] will retry after 456.814782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.236935 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:04.357588 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.357621 3212985 retry.go:31] will retry after 773.010299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.458872 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:04.516437 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.516469 3212985 retry.go:31] will retry after 1.201644683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.556582 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:04.622293 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.622328 3212985 retry.go:31] will retry after 1.824101164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:04.760127 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
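	[editor's note] The node_ready.go warning above shows the same outage from another angle: minikube polls the node's Ready condition through the apiserver at 192.168.85.2:8443 and gets connection refused there as well. A condition check of that kind can be sketched with client-go; the kubeconfig path and node name below are taken from this log, while the rest is a generic illustration rather than minikube's own code.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as used by the kubectl commands in this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// This Get fails with "connection refused" while the apiserver is
		// down, which is exactly what the node_ready.go warning reports.
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "no-preload-118262", metav1.GetOptions{})
		if err != nil {
			fmt.Println("error getting node:", err)
			return
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("node Ready condition: %s\n", cond.Status)
			}
		}
	}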
	I1217 11:56:05.131775 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:05.197068 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.197101 3212985 retry.go:31] will retry after 718.007742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.719362 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:05.829095 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.829128 3212985 retry.go:31] will retry after 1.266711526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.915322 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:05.976930 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.976963 3212985 retry.go:31] will retry after 983.864547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:06.446716 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:06.526752 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:06.526789 3212985 retry.go:31] will retry after 1.791049068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:06.962003 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:07.021949 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:07.021981 3212985 retry.go:31] will retry after 3.775428423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:07.096119 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:07.154813 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:07.154841 3212985 retry.go:31] will retry after 1.6043331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:07.261035 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:08.318583 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:08.381665 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:08.381708 3212985 retry.go:31] will retry after 3.517495633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:08.759662 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:08.864890 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:08.864925 3212985 retry.go:31] will retry after 2.28260361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:09.760003 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:10.798319 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:10.860002 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:10.860034 3212985 retry.go:31] will retry after 4.82591476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.148644 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:11.216089 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.216123 3212985 retry.go:31] will retry after 6.175133091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.900240 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:11.969428 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.969471 3212985 retry.go:31] will retry after 2.437731885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:12.260387 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:14.408207 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:14.471530 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:14.471564 3212985 retry.go:31] will retry after 7.973001246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:14.760396 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:15.686226 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:15.751739 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:15.751823 3212985 retry.go:31] will retry after 4.990913672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:16.760725 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:17.392069 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:17.450109 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:17.450199 3212985 retry.go:31] will retry after 4.605565076s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:19.260667 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:20.743360 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:20.828411 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:20.828465 3212985 retry.go:31] will retry after 11.110369506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:21.759604 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:22.056015 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:22.115928 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:22.116008 3212985 retry.go:31] will retry after 10.310245173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:22.444820 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:22.509039 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:22.509077 3212985 retry.go:31] will retry after 13.279816116s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:23.759723 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:26.259612 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:28.260333 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:30.759694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:31.939110 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:32.013382 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:32.013417 3212985 retry.go:31] will retry after 17.792843999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:32.426534 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:32.489297 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:32.489332 3212985 retry.go:31] will retry after 10.214719089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:32.760038 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:35.259670 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:35.789928 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:35.882559 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:35.882592 3212985 retry.go:31] will retry after 9.227629247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:37.260542 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:39.759863 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:42.259856 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:42.704316 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:42.766302 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:42.766335 3212985 retry.go:31] will retry after 16.793347769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:44.260613 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:45.111278 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:45.238863 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:45.238978 3212985 retry.go:31] will retry after 27.971446484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:46.759704 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:48.760327 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:49.806921 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:49.869443 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:49.869476 3212985 retry.go:31] will retry after 26.53119581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:50.760462 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:53.259758 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:55.759768 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:58.259631 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:59.560400 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:59.620061 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:59.620096 3212985 retry.go:31] will retry after 23.364320547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:00.259931 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:02.761870 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:05.259592 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:07.759629 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:09.760369 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:12.259883 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:13.211445 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:57:13.333945 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:57:13.333983 3212985 retry.go:31] will retry after 44.1812533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:14.260040 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:16.260864 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:16.401385 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:57:16.464135 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:57:16.464167 3212985 retry.go:31] will retry after 45.341892172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:18.759728 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:20.760439 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:22.985044 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:57:23.107103 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:23.107213 3212985 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
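Note that the --validate=false hint printed by kubectl would not help here: disabling validation only skips the OpenAPI download, while kubectl apply still has to reach the apiserver, so a hypothetical retry with validation off fails with the same connection refused:

	# Hypothetical: validation off does not remove the dependency on a
	# reachable apiserver at localhost:8443; this would fail the same way.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storageclass.yaml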
	W1217 11:57:23.259704 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:25.759680 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:28.259589 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:30.759848 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:33.259631 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:35.259696 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:37.759650 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:39.759939 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:42.259840 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:44.759690 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:46.760542 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:49.259755 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:51.259894 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:53.259968 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:55.760675 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:57.516068 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:57:57.626784 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:57.626882 3212985 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1217 11:57:58.259617 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:00.259717 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:58:01.806452 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:58:01.869925 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:58:01.870041 3212985 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 11:58:01.873020 3212985 out.go:179] * Enabled addons: 
	I1217 11:58:01.875902 3212985 addons.go:530] duration metric: took 2m0.156897144s for enable addons: enabled=[]
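The addon phase therefore exhausts its two-minute budget and ends with an empty set (enabled=[]). Once the control plane is healthy, the same addons could be retried by hand; a hypothetical follow-up, not part of this test run:

	# Hypothetical manual retry after the apiserver is reachable again:
	minikube addons enable storage-provisioner -p no-preload-118262
	minikube addons enable dashboard -p no-preload-118262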
	W1217 11:58:02.759596 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:04.759668 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:07.259570 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:09.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:11.759720 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:14.259689 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:16.759733 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:19.259603 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:21.259694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:23.759673 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:26.259638 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:28.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:30.759896 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:33.259742 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:35.759679 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:38.259699 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:40.759816 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:43.259680 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:45.260027 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:47.759590 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
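From here the report interleaves a second run: the node_ready polls above are logged by process 3212985 (the no-preload-118262 start), while the kubeadm failure below comes from process 3204903, a separate start invocation. When reading a saved copy of such a log, filtering on the PID column separates the streams; a hypothetical post-processing step, using the logs.txt file name suggested near the end of this output:

	# Hypothetical: isolate one run's lines by its PID column.
	grep ' 3204903 ' logs.txt | less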
	I1217 11:58:53.028972 3204903 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001753064s
	I1217 11:58:53.029259 3204903 kubeadm.go:319] 
	I1217 11:58:53.029324 3204903 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:58:53.029359 3204903 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:58:53.029464 3204903 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:58:53.029468 3204903 kubeadm.go:319] 
	I1217 11:58:53.029572 3204903 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:58:53.029604 3204903 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:58:53.029645 3204903 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:58:53.029650 3204903 kubeadm.go:319] 
	I1217 11:58:53.035722 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:58:53.036145 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:58:53.036254 3204903 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:58:53.036508 3204903 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 11:58:53.036516 3204903 kubeadm.go:319] 
	I1217 11:58:53.036585 3204903 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
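The digest above is kubeadm's standard wait-control-plane failure: the kubelet never answered its local health probe within the 4m0s budget. The probe is the plain HTTP call quoted in the error, so rerunning it on the node alongside the two suggested systemd commands (hypothetically, over minikube ssh) distinguishes a kubelet that never started from one that is running but unhealthy:

	# Hypothetical on-node triage, using the exact probe kubeadm ran:
	curl -sSL http://127.0.0.1:10248/healthz
	systemctl status kubelet
	journalctl -xeu kubelet -n 100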
	I1217 11:58:53.036636 3204903 kubeadm.go:403] duration metric: took 8m6.348588119s to StartCluster
	I1217 11:58:53.036680 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:58:53.036746 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:58:53.092234 3204903 cri.go:89] found id: ""
	I1217 11:58:53.092255 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.092264 3204903 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:58:53.092270 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:58:53.092329 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:58:53.120381 3204903 cri.go:89] found id: ""
	I1217 11:58:53.120404 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.120412 3204903 logs.go:284] No container was found matching "etcd"
	I1217 11:58:53.120440 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:58:53.120504 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:58:53.150913 3204903 cri.go:89] found id: ""
	I1217 11:58:53.150935 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.150943 3204903 logs.go:284] No container was found matching "coredns"
	I1217 11:58:53.150949 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:58:53.151010 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:58:53.177002 3204903 cri.go:89] found id: ""
	I1217 11:58:53.177028 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.177037 3204903 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:58:53.177044 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:58:53.177105 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:58:53.202075 3204903 cri.go:89] found id: ""
	I1217 11:58:53.202101 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.202109 3204903 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:58:53.202116 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:58:53.202175 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:58:53.230674 3204903 cri.go:89] found id: ""
	I1217 11:58:53.230701 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.230709 3204903 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:58:53.230716 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:58:53.230773 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:58:53.256007 3204903 cri.go:89] found id: ""
	I1217 11:58:53.256034 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.256042 3204903 logs.go:284] No container was found matching "kindnet"
	I1217 11:58:53.256053 3204903 logs.go:123] Gathering logs for kubelet ...
	I1217 11:58:53.256065 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:58:53.314487 3204903 logs.go:123] Gathering logs for dmesg ...
	I1217 11:58:53.314524 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:58:53.331203 3204903 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:58:53.331240 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:58:53.399250 3204903 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:58:53.390312    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.390861    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.392589    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.393116    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.394713    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 11:58:53.390312    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.390861    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.392589    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.393116    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.394713    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:58:53.399274 3204903 logs.go:123] Gathering logs for containerd ...
	I1217 11:58:53.399288 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:58:53.439803 3204903 logs.go:123] Gathering logs for container status ...
	I1217 11:58:53.439840 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 11:58:53.468929 3204903 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001753064s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
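Of the three preflight warnings, the cgroups one may matter most on this host: the kernel is 5.15 with cgroup v1 controllers enabled, and the warning states that kubelet v1.35 or newer requires FailCgroupV1 to be set to false for cgroup v1 support. A sketch of that change, assuming the camelCase YAML spelling of the field the warning names, would be:

	# Sketch only: re-enable cgroup v1 support for kubelet >= v1.35 by setting
	# the option the warning names (assumed YAML key: failCgroupV1) in the
	# kubelet config that kubeadm wrote, then restarting the service.
	cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet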
	W1217 11:58:53.469033 3204903 out.go:285] * 
	W1217 11:58:53.469130 3204903 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1217 11:58:53.469177 3204903 out.go:285] * 
	W1217 11:58:53.471457 3204903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:58:53.476865 3204903 out.go:203] 
	W1217 11:58:53.479927 3204903 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001753064s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:58:53.479964 3204903 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:58:53.479988 3204903 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 11:58:53.483089 3204903 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755552932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755645459Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755785534Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755875928Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755955130Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756051382Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756130872Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756213471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756303471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756462466Z" level=info msg="Connect containerd service"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756851273Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.757629330Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.779637015Z" level=info msg="Start subscribing containerd event"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.779721591Z" level=info msg="Start recovering state"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.784722605Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.784946994Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838108706Z" level=info msg="Start event monitor"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838307741Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838371346Z" level=info msg="Start streaming server"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838434852Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838493632Z" level=info msg="runtime interface starting up..."
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838556605Z" level=info msg="starting plugins..."
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838618462Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 11:50:44 newest-cni-669680 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.840932253Z" level=info msg="containerd successfully booted in 0.112312s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:58:54.552301    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:54.553050    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:54.568963    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:54.569621    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:54.571347    4983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:58:54 up 17:41,  0 user,  load average: 0.49, 0.63, 1.36
	Linux newest-cni-669680 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:58:51 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:58:52 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 17 11:58:52 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:58:52 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:58:52 newest-cni-669680 kubelet[4791]: E1217 11:58:52.312619    4791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:58:52 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:58:52 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:58:52 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 11:58:52 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:58:53 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:58:53 newest-cni-669680 kubelet[4796]: E1217 11:58:53.084042    4796 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:58:53 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:58:53 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:58:53 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 11:58:53 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:58:53 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:58:53 newest-cni-669680 kubelet[4890]: E1217 11:58:53.842523    4890 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:58:53 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:58:53 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:58:54 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 11:58:54 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:58:54 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:58:54 newest-cni-669680 kubelet[4987]: E1217 11:58:54.583683    4987 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:58:54 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:58:54 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680: exit status 6 (343.769093ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 11:58:55.070710 3217480 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-669680" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-669680" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (501.36s)
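
Note: the "==> kubelet <==" excerpt above pins down why this start looped for 500s: kubelet v1.35.0-rc.1 fails its own configuration validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"). The kubeadm preflight warning names the opt-out: set the kubelet configuration option 'FailCgroupV1' to 'false' and skip the SystemVerification check. The kubeadm invocation above already skips SystemVerification via --ignore-preflight-errors, so the missing piece is the kubelet-side setting. A minimal sketch of that opt-in, assuming the YAML field matches the option named in the warning (untested against this RC):

	# KubeletConfiguration fragment, e.g. merged into /var/lib/kubelet/config.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false   # explicitly permit running on a cgroup v1 host

Per the same warning, the durable fix is migrating the host to cgroup v2 rather than carrying this override.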

x
+
TestStartStop/group/no-preload/serial/DeployApp (2.97s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-118262 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-118262 create -f testdata/busybox.yaml: exit status 1 (54.944004ms)

** stderr ** 
	error: context "no-preload-118262" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-118262 create -f testdata/busybox.yaml failed: exit status 1
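
Note: the create fails before any Kubernetes API call is made; kubectl simply has no context named no-preload-118262 (the status stderr below confirms the profile is absent from /home/jenkins/minikube-integration/22182-2922712/kubeconfig). A hedged sketch of the manual diagnosis, using only standard commands and the profile name taken from the log:

	kubectl config get-contexts                      # list the contexts the kubeconfig actually holds
	minikube update-context -p no-preload-118262     # rewrite this profile's context, as the status warning suggests

Repairing the context only helps once the cluster is actually up; here it was never written because the earlier start did not complete.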
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-118262
helpers_test.go:244: (dbg) docker inspect no-preload-118262:

-- stdout --
	[
	    {
	        "Id": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	        "Created": "2025-12-17T11:45:23.889791979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3184585,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:45:23.975335333Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hostname",
	        "HostsPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hosts",
	        "LogPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362-json.log",
	        "Name": "/no-preload-118262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-118262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-118262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	                "LowerDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-118262",
	                "Source": "/var/lib/docker/volumes/no-preload-118262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-118262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-118262",
	                "name.minikube.sigs.k8s.io": "no-preload-118262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edfd8c84516eb23c0ad2b26b7726367c3e837ddca981000c80312ea31fd9a26a",
	            "SandboxKey": "/var/run/docker/netns/edfd8c84516e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-118262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:56:4e:97:d8:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3227851744df2bdac9c367dc789ddfe2892f877b7b9b947cdcd81cb2897c4ba1",
	                    "EndpointID": "b3f5ff720ab2b961fc2a2904ac219198576784cb510a37f2350f10bf17783082",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-118262",
	                        "4578079103f7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
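
Note: the inspect dump shows a healthy container (State.Status "running", ports 22/2376/5000/8443/32443 published on 127.0.0.1), so the failure is not on the Docker side. When only a field or two matters, the dump can be narrowed with docker's standard Go-template support (container name taken from the dump above):

	docker inspect -f '{{ .State.Status }}' no-preload-118262                 # -> running
	docker inspect -f '{{ json .NetworkSettings.Ports }}' no-preload-118262   # published host ports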
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262: exit status 6 (324.008578ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 11:53:57.063989 3209918 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-118262 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │                     │
	│ start   │ -p cert-expiration-182607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                          │ cert-expiration-182607       │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:45 UTC │
	│ delete  │ -p cert-expiration-182607                                                                                                                                                                                                                                │ cert-expiration-182607       │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:45 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:46 UTC │
	│ addons  │ enable metrics-server -p embed-certs-628462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:46 UTC │
	│ stop    │ -p embed-certs-628462 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:47 UTC │
	│ addons  │ enable dashboard -p embed-certs-628462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ image   │ embed-certs-628462 image list --format=json                                                                                                                                                                                                              │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ pause   │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ unpause │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:50:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:50:33.770675 3204903 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:50:33.770892 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.770930 3204903 out.go:374] Setting ErrFile to fd 2...
	I1217 11:50:33.770950 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.771242 3204903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:50:33.771720 3204903 out.go:368] Setting JSON to false
	I1217 11:50:33.772826 3204903 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63184,"bootTime":1765909050,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:50:33.772934 3204903 start.go:143] virtualization:  
	I1217 11:50:33.777422 3204903 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:50:33.781147 3204903 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:50:33.781258 3204903 notify.go:221] Checking for updates...
	I1217 11:50:33.787770 3204903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:50:33.790969 3204903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:50:33.794108 3204903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:50:33.797396 3204903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:50:33.800914 3204903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:50:33.804694 3204903 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:33.804819 3204903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:50:33.836693 3204903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:50:33.836824 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.905198 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.886446399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.905310 3204903 docker.go:319] overlay module found
	I1217 11:50:33.908522 3204903 out.go:179] * Using the docker driver based on user configuration
	I1217 11:50:33.911483 3204903 start.go:309] selected driver: docker
	I1217 11:50:33.911512 3204903 start.go:927] validating driver "docker" against <nil>
	I1217 11:50:33.911528 3204903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:50:33.912303 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.968344 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.958386366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.968600 3204903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 11:50:33.968643 3204903 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 11:50:33.968883 3204903 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:50:33.971848 3204903 out.go:179] * Using Docker driver with root privileges
	I1217 11:50:33.974707 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:33.974785 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:33.974803 3204903 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:50:33.974912 3204903 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:33.980004 3204903 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 11:50:33.982917 3204903 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:50:33.985952 3204903 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:50:33.988892 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:33.988945 3204903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 11:50:33.988990 3204903 cache.go:65] Caching tarball of preloaded images
	I1217 11:50:33.989015 3204903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:50:33.989111 3204903 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 11:50:33.989123 3204903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
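The preload check above only verifies that the cached tarball exists on disk; a minimal manual equivalent, assuming the same MINIKUBE_HOME layout as this run, is:

    ls -lh /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4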
	I1217 11:50:33.989239 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:33.989268 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json: {Name:mk0a64d844d14a82596feb52de4f9f10fa21ee9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:34.014470 3204903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:50:34.014499 3204903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:50:34.014516 3204903 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:50:34.014550 3204903 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:50:34.014670 3204903 start.go:364] duration metric: took 97.672µs to acquireMachinesLock for "newest-cni-669680"
	I1217 11:50:34.014703 3204903 start.go:93] Provisioning new machine with config: &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:50:34.014791 3204903 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:50:34.018329 3204903 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:50:34.018591 3204903 start.go:159] libmachine.API.Create for "newest-cni-669680" (driver="docker")
	I1217 11:50:34.018632 3204903 client.go:173] LocalClient.Create starting
	I1217 11:50:34.018712 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 11:50:34.018752 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018777 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.018837 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 11:50:34.018864 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018877 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.019266 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:50:34.036819 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:50:34.036904 3204903 network_create.go:284] running [docker network inspect newest-cni-669680] to gather additional debugging logs...
	I1217 11:50:34.036925 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680
	W1217 11:50:34.053220 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 returned with exit code 1
	I1217 11:50:34.053254 3204903 network_create.go:287] error running [docker network inspect newest-cni-669680]: docker network inspect newest-cni-669680: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-669680 not found
	I1217 11:50:34.053268 3204903 network_create.go:289] output of [docker network inspect newest-cni-669680]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-669680 not found
	
	** /stderr **
	I1217 11:50:34.053385 3204903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:34.073660 3204903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
	I1217 11:50:34.074039 3204903 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e0545776686c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:70:9e:49:ed:7d} reservation:<nil>}
	I1217 11:50:34.074407 3204903 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-279becfad84b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:b7:62:6e:a9:ee} reservation:<nil>}
	I1217 11:50:34.074906 3204903 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a004b0}
	I1217 11:50:34.074944 3204903 network_create.go:124] attempt to create docker network newest-cni-669680 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 11:50:34.075027 3204903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-669680 newest-cni-669680
	I1217 11:50:34.133594 3204903 network_create.go:108] docker network newest-cni-669680 192.168.76.0/24 created
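The subnet scan above walks the private 192.168.x.0/24 candidates in steps of 9 in the third octet (49, 58, 67, 76) and takes the first one with no existing bridge interface. To confirm the created network by hand, with the same name and format conventions this run uses:

    docker network inspect newest-cni-669680 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected output: 192.168.76.0/24 192.168.76.1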
	I1217 11:50:34.133624 3204903 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-669680" container
	I1217 11:50:34.133717 3204903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:50:34.150255 3204903 cli_runner.go:164] Run: docker volume create newest-cni-669680 --label name.minikube.sigs.k8s.io=newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:50:34.168619 3204903 oci.go:103] Successfully created a docker volume newest-cni-669680
	I1217 11:50:34.168718 3204903 cli_runner.go:164] Run: docker run --rm --name newest-cni-669680-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --entrypoint /usr/bin/test -v newest-cni-669680:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:50:34.733678 3204903 oci.go:107] Successfully prepared a docker volume newest-cni-669680
	I1217 11:50:34.733775 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:34.733794 3204903 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:50:34.733863 3204903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:50:38.835811 3204903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.101893526s)
	I1217 11:50:38.835847 3204903 kic.go:203] duration metric: took 4.10204956s to extract preloaded images to volume ...
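The sidecar above untars the preload into the newest-cni-669680 volume, which is later mounted as the node's /var. A hedged spot-check of the extraction, reusing the same kicbase image and assuming the containerd content lands under /var/lib/containerd (as the tarball name suggests):

    docker run --rm --entrypoint /bin/ls -v newest-cni-669680:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 \
      /var/lib/containerd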
	W1217 11:50:38.835990 3204903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 11:50:38.836106 3204903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:50:38.889030 3204903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-669680 --name newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-669680 --network newest-cni-669680 --ip 192.168.76.2 --volume newest-cni-669680:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:50:39.198138 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Running}}
	I1217 11:50:39.220630 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.245099 3204903 cli_runner.go:164] Run: docker exec newest-cni-669680 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:50:39.296762 3204903 oci.go:144] the created container "newest-cni-669680" has a running status.
	I1217 11:50:39.296821 3204903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519...
	I1217 11:50:39.301246 3204903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 11:50:39.326812 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.353133 3204903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:50:39.353152 3204903 kic_runner.go:114] Args: [docker exec --privileged newest-cni-669680 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:50:39.407725 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.427651 3204903 machine.go:94] provisionDockerMachine start ...
	I1217 11:50:39.427814 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:39.449037 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:39.449153 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:39.449161 3204903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:50:39.449689 3204903 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46730->127.0.0.1:36043: read: connection reset by peer
	I1217 11:50:42.588075 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 11:50:42.588101 3204903 ubuntu.go:182] provisioning hostname "newest-cni-669680"
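The first dial at 11:50:39 is reset because sshd inside the freshly started container is not yet accepting connections; the retry at 11:50:42 succeeds. The equivalent manual session, using the key path and published host port from this run:

    ssh -i /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 \
      -p 36043 docker@127.0.0.1 hostname
    # newest-cni-669680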
	I1217 11:50:42.588181 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.610896 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.611004 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.611019 3204903 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 11:50:42.758221 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 11:50:42.758323 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.776901 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.777030 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.777054 3204903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:50:42.909042 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:50:42.909069 3204903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:50:42.909089 3204903 ubuntu.go:190] setting up certificates
	I1217 11:50:42.909098 3204903 provision.go:84] configureAuth start
	I1217 11:50:42.909162 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:42.927260 3204903 provision.go:143] copyHostCerts
	I1217 11:50:42.927326 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:50:42.927335 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:50:42.927414 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:50:42.927515 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:50:42.927521 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:50:42.927546 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:50:42.927611 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:50:42.927615 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:50:42.927639 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:50:42.927694 3204903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
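The server certificate is minted with the SAN set shown above. A standard openssl check of what was actually embedded, using the same output path as the log:

    openssl x509 -in /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'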
	I1217 11:50:43.131974 3204903 provision.go:177] copyRemoteCerts
	I1217 11:50:43.132056 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:50:43.132097 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.150139 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.244232 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:50:43.264226 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:50:43.288821 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:50:43.309853 3204903 provision.go:87] duration metric: took 400.734271ms to configureAuth
	I1217 11:50:43.309977 3204903 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:50:43.310242 3204903 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:43.310275 3204903 machine.go:97] duration metric: took 3.882542746s to provisionDockerMachine
	I1217 11:50:43.310297 3204903 client.go:176] duration metric: took 9.291651647s to LocalClient.Create
	I1217 11:50:43.310348 3204903 start.go:167] duration metric: took 9.291744428s to libmachine.API.Create "newest-cni-669680"
	I1217 11:50:43.310376 3204903 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 11:50:43.310413 3204903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:50:43.310526 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:50:43.310608 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.332854 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.428662 3204903 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:50:43.432294 3204903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:50:43.432325 3204903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:50:43.432337 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:50:43.432397 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:50:43.432547 3204903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:50:43.432651 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:50:43.440004 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:43.457337 3204903 start.go:296] duration metric: took 146.930324ms for postStartSetup
	I1217 11:50:43.457705 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.474473 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:43.474760 3204903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:50:43.474809 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.491508 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.585952 3204903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:50:43.591149 3204903 start.go:128] duration metric: took 9.576343313s to createHost
	I1217 11:50:43.591175 3204903 start.go:83] releasing machines lock for "newest-cni-669680", held for 9.576490895s
	I1217 11:50:43.591260 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.609275 3204903 ssh_runner.go:195] Run: cat /version.json
	I1217 11:50:43.609319 3204903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:50:43.609330 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.609377 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.631237 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.636805 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.724501 3204903 ssh_runner.go:195] Run: systemctl --version
	I1217 11:50:43.819531 3204903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:50:43.823938 3204903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:50:43.824018 3204903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:50:43.852205 3204903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
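The find/mv pass renames any bridge or podman CNI configs with a .mk_disabled suffix so that only the CNI minikube installs (kindnet here) stays active. Inside the node, the effect would show up roughly as:

    docker exec newest-cni-669680 ls /etc/cni/net.d
    # e.g. 10-crio-bridge.conflist.disabled.mk_disabled  87-podman-bridge.conflist.mk_disabled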
	I1217 11:50:43.852281 3204903 start.go:496] detecting cgroup driver to use...
	I1217 11:50:43.852330 3204903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:50:43.852407 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:50:43.867801 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:50:43.881415 3204903 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:50:43.881505 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:50:43.898869 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:50:43.917331 3204903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:50:44.042660 3204903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:50:44.176397 3204903 docker.go:234] disabling docker service ...
	I1217 11:50:44.176490 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:50:44.197465 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:50:44.211041 3204903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:50:44.324043 3204903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:50:44.437310 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:50:44.451253 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:50:44.468227 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:50:44.477660 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:50:44.487940 3204903 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:50:44.488046 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:50:44.497581 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.506638 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:50:44.516061 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.524921 3204903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:50:44.533457 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:50:44.542606 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:50:44.551989 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:50:44.561578 3204903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:50:44.570051 3204903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:50:44.577822 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:44.688867 3204903 ssh_runner.go:195] Run: sudo systemctl restart containerd
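The sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, set SystemdCgroup = false to match the cgroupfs driver detected on the host, and re-add enable_unprivileged_ports before the restart. A post-restart sanity check from the host side:

    docker exec newest-cni-669680 grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
    docker exec newest-cni-669680 systemctl is-active containerd
    # active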
	I1217 11:50:44.840667 3204903 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:50:44.840788 3204903 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:50:44.845363 3204903 start.go:564] Will wait 60s for crictl version
	I1217 11:50:44.845485 3204903 ssh_runner.go:195] Run: which crictl
	I1217 11:50:44.849376 3204903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:50:44.883387 3204903 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 11:50:44.883511 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.905807 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.930438 3204903 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:50:44.933446 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:44.950378 3204903 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:50:44.954519 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:50:44.968645 3204903 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 11:50:44.971593 3204903 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:50:44.971744 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:44.971843 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.011583 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.011607 3204903 containerd.go:534] Images already preloaded, skipping extraction
	I1217 11:50:45.011729 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.074368 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.074393 3204903 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:50:45.074401 3204903 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:50:45.074511 3204903 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
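The kubelet unit override above is what later gets written out as the 10-kubeadm.conf drop-in (see the 326-byte scp below); once it is in place, the merged unit can be inspected with:

    docker exec newest-cni-669680 systemctl cat kubelet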
	I1217 11:50:45.074583 3204903 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:50:45.124774 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:45.124803 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:45.124823 3204903 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 11:50:45.124848 3204903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:50:45.125086 3204903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:50:45.125178 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:50:45.136963 3204903 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:50:45.137076 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:50:45.146865 3204903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 11:50:45.164693 3204903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:50:45.183943 3204903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
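The rendered kubeadm config above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. If kubeadm's built-in validator is available for this version (it exists in recent releases), the file could be sanity-checked with something like:

    docker exec newest-cni-669680 /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new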
	I1217 11:50:45.201780 3204903 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:50:45.209411 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:50:45.227993 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:45.376186 3204903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:50:45.396221 3204903 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 11:50:45.396257 3204903 certs.go:195] generating shared ca certs ...
	I1217 11:50:45.396275 3204903 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.396432 3204903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:50:45.396497 3204903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:50:45.396511 3204903 certs.go:257] generating profile certs ...
	I1217 11:50:45.396576 3204903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 11:50:45.396594 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt with IP's: []
	I1217 11:50:45.498992 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt ...
	I1217 11:50:45.499023 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt: {Name:mkfb66bec095c72b7c1a0e563529baf2180c300c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499228 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key ...
	I1217 11:50:45.499243 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key: {Name:mk7292acf4e53dd5012d44cc923a43c80ae9a7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499340 3204903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 11:50:45.499360 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 11:50:45.885492 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 ...
	I1217 11:50:45.885525 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161: {Name:mkc2aab84e543777fe00770e300fac9f47cd579f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885732 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 ...
	I1217 11:50:45.885749 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161: {Name:mk25ae271c13c745dd8ef046c320963d505be1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885837 3204903 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt
	I1217 11:50:45.885921 3204903 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key
	I1217 11:50:45.885986 3204903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 11:50:45.886007 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt with IP's: []
	I1217 11:50:46.187502 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt ...
	I1217 11:50:46.187541 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt: {Name:mk12f9e3a4ac82afa8ef3e938731ab0419f581a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:46.187741 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key ...
	I1217 11:50:46.187756 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key: {Name:mk258e31e31368b8ae182e758b28fd15f98dabb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:46.187958 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:50:46.188008 3204903 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:50:46.188031 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:50:46.188065 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:50:46.188095 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:50:46.188125 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:50:46.188174 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:46.188855 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:50:46.209179 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:50:46.229625 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:50:46.248348 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:50:46.280053 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:50:46.299540 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 11:50:46.331420 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:50:46.354812 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:50:46.379741 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:50:46.398349 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:50:46.416502 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:50:46.434656 3204903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:50:46.448045 3204903 ssh_runner.go:195] Run: openssl version
	I1217 11:50:46.454404 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.462383 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:50:46.470220 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474117 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474205 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.515776 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:50:46.523521 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:50:46.531167 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.538808 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:50:46.546526 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550351 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550420 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.591537 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:50:46.599582 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
	I1217 11:50:46.607230 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.615038 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:50:46.623144 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627219 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627293 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.668548 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:50:46.676480 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
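Each openssl x509 -hash call above computes the subject hash that OpenSSL's CA lookup uses, and the matching ln -fs creates the <hash>.0 symlink under /etc/ssl/certs that makes the cert trusted system-wide. For the minikubeCA cert, for example:

    docker exec newest-cni-669680 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    docker exec newest-cni-669680 ls -l /etc/ssl/certs/b5213941.0
    # ... b5213941.0 -> /etc/ssl/certs/minikubeCA.pem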
	I1217 11:50:46.684254 3204903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:50:46.687989 3204903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:50:46.688053 3204903 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:46.688186 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:50:46.688251 3204903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:50:46.715504 3204903 cri.go:89] found id: ""
	I1217 11:50:46.715577 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:50:46.723636 3204903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:50:46.731913 3204903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:50:46.732013 3204903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:50:46.740391 3204903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:50:46.740486 3204903 kubeadm.go:158] found existing configuration files:
	
	I1217 11:50:46.740554 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:50:46.748658 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:50:46.748734 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:50:46.756251 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:50:46.764744 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:50:46.764812 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:50:46.772495 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.780304 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:50:46.780374 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.787858 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:50:46.795827 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:50:46.795920 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
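
The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint. A rough shell equivalent of that loop, assuming the same endpoint and file set shown in the log (here every grep exits with status 2 because the files do not exist, so every rm is a no-op before kubeadm regenerates them):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the kubeconfig only if it points at the expected control-plane endpoint
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
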
	I1217 11:50:46.803940 3204903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:50:46.842300 3204903 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:50:46.842364 3204903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:50:46.914982 3204903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:50:46.915066 3204903 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:50:46.915120 3204903 kubeadm.go:319] OS: Linux
	I1217 11:50:46.915224 3204903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:50:46.915306 3204903 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:50:46.915380 3204903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:50:46.915458 3204903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:50:46.915534 3204903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:50:46.915612 3204903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:50:46.915688 3204903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:50:46.915760 3204903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:50:46.915833 3204903 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:50:46.991927 3204903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:50:46.992117 3204903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:50:46.992264 3204903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:50:47.011559 3204903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:50:47.018032 3204903 out.go:252]   - Generating certificates and keys ...
	I1217 11:50:47.018195 3204903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:50:47.018301 3204903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:50:47.129470 3204903 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:50:47.445618 3204903 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:50:47.915158 3204903 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:50:48.499656 3204903 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:50:48.596834 3204903 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:50:48.597124 3204903 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.753661 3204903 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:50:48.754010 3204903 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.982189 3204903 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:50:49.176711 3204903 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:50:49.329925 3204903 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:50:49.330545 3204903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:50:49.669219 3204903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:50:49.769896 3204903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:50:50.134620 3204903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:50:50.518232 3204903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:50:51.159536 3204903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:50:51.160438 3204903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:50:51.163380 3204903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:50:51.167113 3204903 out.go:252]   - Booting up control plane ...
	I1217 11:50:51.167275 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:50:51.167359 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:50:51.168888 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:50:51.187617 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:50:51.187958 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:50:51.195573 3204903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:50:51.195900 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:50:51.195946 3204903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:50:51.332866 3204903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:50:51.332987 3204903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:53:54.678520 3184285 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061049s
	I1217 11:53:54.678562 3184285 kubeadm.go:319] 
	I1217 11:53:54.678668 3184285 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:53:54.678735 3184285 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:53:54.679061 3184285 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:53:54.679078 3184285 kubeadm.go:319] 
	I1217 11:53:54.679259 3184285 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:53:54.679319 3184285 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:53:54.679610 3184285 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:53:54.679619 3184285 kubeadm.go:319] 
	I1217 11:53:54.684331 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:53:54.684946 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:53:54.685447 3184285 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:53:54.685728 3184285 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 11:53:54.685741 3184285 kubeadm.go:319] 
	I1217 11:53:54.685819 3184285 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
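
The kubelet-check timeout above, taken together with the cgroups v1 deprecation warning, points at the host's cgroup mode rather than at kubeadm itself. A quick way to confirm which hierarchy the node is running, using the standard filesystem-type check (nothing minikube-specific assumed):

    stat -fc %T /sys/fs/cgroup
    # "cgroup2fs" => unified cgroup v2 hierarchy; "tmpfs" => legacy cgroup v1

The kubelet journal later in this report confirms the v1 case: kubelet v1.35 exits during configuration validation on cgroup v1 hosts unless v1 support is explicitly re-enabled.
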
	I1217 11:53:54.685877 3184285 kubeadm.go:403] duration metric: took 8m8.248541569s to StartCluster
	I1217 11:53:54.685915 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:54.685995 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:54.712731 3184285 cri.go:89] found id: ""
	I1217 11:53:54.712767 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.712778 3184285 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:53:54.712784 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:53:54.712847 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:54.738074 3184285 cri.go:89] found id: ""
	I1217 11:53:54.738101 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.738110 3184285 logs.go:284] No container was found matching "etcd"
	I1217 11:53:54.738116 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:53:54.738176 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:54.763115 3184285 cri.go:89] found id: ""
	I1217 11:53:54.763142 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.763151 3184285 logs.go:284] No container was found matching "coredns"
	I1217 11:53:54.763160 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:54.763223 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:54.788613 3184285 cri.go:89] found id: ""
	I1217 11:53:54.788637 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.788646 3184285 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:53:54.788652 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:54.788710 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:54.814171 3184285 cri.go:89] found id: ""
	I1217 11:53:54.814207 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.814216 3184285 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:54.814222 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:54.814287 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:54.839339 3184285 cri.go:89] found id: ""
	I1217 11:53:54.839362 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.839370 3184285 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:53:54.839376 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:54.839434 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:54.866460 3184285 cri.go:89] found id: ""
	I1217 11:53:54.866486 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.866495 3184285 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:54.866505 3184285 logs.go:123] Gathering logs for container status ...
	I1217 11:53:54.866516 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:54.897933 3184285 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:54.897961 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:54.955505 3184285 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:54.955540 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:54.972937 3184285 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:54.972967 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:55.055017 3184285 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:55.055059 3184285 logs.go:123] Gathering logs for containerd ...
	I1217 11:53:55.055072 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 11:53:55.106009 3184285 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 11:53:55.106084 3184285 out.go:285] * 
	W1217 11:53:55.106176 3184285 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... identical kubeadm init stdout/stderr as in the "Error starting cluster" message above, elided ...]
	W1217 11:53:55.106224 3184285 out.go:285] * 
	W1217 11:53:55.108372 3184285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:53:55.114186 3184285 out.go:203] 
	W1217 11:53:55.117966 3184285 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... identical kubeadm init stdout/stderr as in the "Error starting cluster" message above, elided ...]
	
	W1217 11:53:55.118025 3184285 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:53:55.118053 3184285 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 11:53:55.121856 3184285 out.go:203] 
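
Spelled out against the profile being post-mortemed below, the retry suggested by the two warning lines above would look roughly like this (the start flags other than --extra-config are taken from those used elsewhere in this report; the original invocation is not shown in this excerpt):

    minikube start -p no-preload-118262 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd

Note this is minikube's generic kubelet advice; given the cgroup v1 validation failure in the kubelet journal below, changing the cgroup driver alone may not get kubelet past startup on this host.
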
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:45:33 no-preload-118262 containerd[757]: time="2025-12-17T11:45:33.505204167Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.939956582Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.942293323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.949096337Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.949777101Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.535897084Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.538753041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.553736301Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.554961027Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.053706291Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.056518475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.074021896Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.075564022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.899212727Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.902323078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.911666939Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.912612705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.544997473Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.548274171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.559283871Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.561918164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.580017138Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.582800563Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.590535248Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.590987315Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:53:57.731536    5656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:57.732380    5656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:57.734157    5656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:57.734756    5656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:57.736349    5656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:53:57 up 17:36,  0 user,  load average: 0.81, 1.21, 1.78
	Linux no-preload-118262 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:53:54 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:55 no-preload-118262 kubelet[5421]: E1217 11:53:55.325373    5421 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:55 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:55 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 11:53:55 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:56 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:56 no-preload-118262 kubelet[5490]: E1217 11:53:56.060640    5490 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 11:53:56 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:56 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:56 no-preload-118262 kubelet[5554]: E1217 11:53:56.817621    5554 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:57 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 11:53:57 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:57 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:57 no-preload-118262 kubelet[5610]: E1217 11:53:57.563657    5610 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:57 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:57 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
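
The repeating journal entries above are the actual root cause: kubelet v1.35 refuses to start on a cgroup v1 host unless v1 support is explicitly opted into. Per the SystemVerification warning earlier in this log, the opt-in is the KubeletConfiguration field FailCgroupV1 set to false. A minimal sketch of that opt-in, assuming direct node access and that the generated config does not already set the field (kubeadm rewrites this file on init, so this is illustrative rather than a durable fix):

    # append the cgroup v1 opt-in to the kubelet config named in the journal errors
    sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet

The durable fix named in the warning is migrating the host to cgroup v2 (see the KEP link quoted above).
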
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262: exit status 6 (338.025856ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 11:53:58.180489 3210138 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-118262" apiserver is not running, skipping kubectl commands (state="Stopped")
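
The status call above exits with status 6 because the profile is missing from the kubeconfig, and minikube's own stdout names the fix. Run against this profile (the -p flag selects the profile, as elsewhere in this report), that would be:

    minikube update-context -p no-preload-118262
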
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-118262
helpers_test.go:244: (dbg) docker inspect no-preload-118262:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	        "Created": "2025-12-17T11:45:23.889791979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3184585,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:45:23.975335333Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hostname",
	        "HostsPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hosts",
	        "LogPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362-json.log",
	        "Name": "/no-preload-118262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-118262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-118262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	                "LowerDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-118262",
	                "Source": "/var/lib/docker/volumes/no-preload-118262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-118262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-118262",
	                "name.minikube.sigs.k8s.io": "no-preload-118262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edfd8c84516eb23c0ad2b26b7726367c3e837ddca981000c80312ea31fd9a26a",
	            "SandboxKey": "/var/run/docker/netns/edfd8c84516e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-118262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:56:4e:97:d8:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3227851744df2bdac9c367dc789ddfe2892f877b7b9b947cdcd81cb2897c4ba1",
	                    "EndpointID": "b3f5ff720ab2b961fc2a2904ac219198576784cb510a37f2350f10bf17783082",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-118262",
	                        "4578079103f7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
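
The inspect output above ends with the container's port map: each of the five exposed ports (22, 2376, 5000, 8443, 32443) is bound to an ephemeral host port on 127.0.0.1. As a sketch, a single mapping can be pulled out of that JSON with the same Go-template shape minikube itself uses later in these logs, here applied to the API-server port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-118262

For this run that would print 36021, matching the Ports block above.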
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262: exit status 6 (337.727032ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 11:53:58.537407 3210223 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

                                                
                                                
** /stderr **
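
The status check fails not because the machine is down (the host reports Running) but because, as the stderr shows, the profile has no endpoint entry in the kubeconfig, so `status` cannot evaluate the cluster. A minimal recovery sketch, assuming the profile itself is intact:

	out/minikube-linux-arm64 update-context -p no-preload-118262   # rewrite the kubeconfig entry for this profile
	kubectl config get-contexts                                    # no-preload-118262 should now be listed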
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-118262 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │                     │
	│ start   │ -p cert-expiration-182607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                          │ cert-expiration-182607       │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:45 UTC │
	│ delete  │ -p cert-expiration-182607                                                                                                                                                                                                                                │ cert-expiration-182607       │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:45 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:46 UTC │
	│ addons  │ enable metrics-server -p embed-certs-628462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:46 UTC │
	│ stop    │ -p embed-certs-628462 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:47 UTC │
	│ addons  │ enable dashboard -p embed-certs-628462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ image   │ embed-certs-628462 image list --format=json                                                                                                                                                                                                              │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ pause   │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ unpause │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:50:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:50:33.770675 3204903 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:50:33.770892 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.770930 3204903 out.go:374] Setting ErrFile to fd 2...
	I1217 11:50:33.770950 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.771242 3204903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:50:33.771720 3204903 out.go:368] Setting JSON to false
	I1217 11:50:33.772826 3204903 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63184,"bootTime":1765909050,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:50:33.772934 3204903 start.go:143] virtualization:  
	I1217 11:50:33.777422 3204903 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:50:33.781147 3204903 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:50:33.781258 3204903 notify.go:221] Checking for updates...
	I1217 11:50:33.787770 3204903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:50:33.790969 3204903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:50:33.794108 3204903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:50:33.797396 3204903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:50:33.800914 3204903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:50:33.804694 3204903 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:33.804819 3204903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:50:33.836693 3204903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:50:33.836824 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.905198 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.886446399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.905310 3204903 docker.go:319] overlay module found
	I1217 11:50:33.908522 3204903 out.go:179] * Using the docker driver based on user configuration
	I1217 11:50:33.911483 3204903 start.go:309] selected driver: docker
	I1217 11:50:33.911512 3204903 start.go:927] validating driver "docker" against <nil>
	I1217 11:50:33.911528 3204903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:50:33.912303 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.968344 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.958386366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.968600 3204903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 11:50:33.968643 3204903 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 11:50:33.968883 3204903 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:50:33.971848 3204903 out.go:179] * Using Docker driver with root privileges
	I1217 11:50:33.974707 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:33.974785 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:33.974803 3204903 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:50:33.974912 3204903 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:33.980004 3204903 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 11:50:33.982917 3204903 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:50:33.985952 3204903 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:50:33.988892 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:33.988945 3204903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 11:50:33.988990 3204903 cache.go:65] Caching tarball of preloaded images
	I1217 11:50:33.989015 3204903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:50:33.989111 3204903 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 11:50:33.989123 3204903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 11:50:33.989239 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:33.989268 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json: {Name:mk0a64d844d14a82596feb52de4f9f10fa21ee9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:34.014470 3204903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:50:34.014499 3204903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:50:34.014516 3204903 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:50:34.014550 3204903 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:50:34.014670 3204903 start.go:364] duration metric: took 97.672µs to acquireMachinesLock for "newest-cni-669680"
	I1217 11:50:34.014703 3204903 start.go:93] Provisioning new machine with config: &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:50:34.014791 3204903 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:50:34.018329 3204903 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:50:34.018591 3204903 start.go:159] libmachine.API.Create for "newest-cni-669680" (driver="docker")
	I1217 11:50:34.018632 3204903 client.go:173] LocalClient.Create starting
	I1217 11:50:34.018712 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 11:50:34.018752 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018777 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.018837 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 11:50:34.018864 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018877 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.019266 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:50:34.036819 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:50:34.036904 3204903 network_create.go:284] running [docker network inspect newest-cni-669680] to gather additional debugging logs...
	I1217 11:50:34.036925 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680
	W1217 11:50:34.053220 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 returned with exit code 1
	I1217 11:50:34.053254 3204903 network_create.go:287] error running [docker network inspect newest-cni-669680]: docker network inspect newest-cni-669680: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-669680 not found
	I1217 11:50:34.053268 3204903 network_create.go:289] output of [docker network inspect newest-cni-669680]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-669680 not found
	
	** /stderr **
	I1217 11:50:34.053385 3204903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:34.073660 3204903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
	I1217 11:50:34.074039 3204903 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e0545776686c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:70:9e:49:ed:7d} reservation:<nil>}
	I1217 11:50:34.074407 3204903 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-279becfad84b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:b7:62:6e:a9:ee} reservation:<nil>}
	I1217 11:50:34.074906 3204903 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a004b0}
	I1217 11:50:34.074944 3204903 network_create.go:124] attempt to create docker network newest-cni-669680 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 11:50:34.075027 3204903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-669680 newest-cni-669680
	I1217 11:50:34.133594 3204903 network_create.go:108] docker network newest-cni-669680 192.168.76.0/24 created
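	# A sketch, not part of the captured run: the "skipping subnet" lines above show
	# the allocator walking the 192.168.x.0/24 ranges already claimed by existing
	# minikube bridges (49, 58, 67) and settling on the first free one, 192.168.76.0/24.
	# Because each network is created with the label visible in the `docker network
	# create` call, the minikube-owned networks can be listed with:
	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true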
	I1217 11:50:34.133624 3204903 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-669680" container
	I1217 11:50:34.133717 3204903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:50:34.150255 3204903 cli_runner.go:164] Run: docker volume create newest-cni-669680 --label name.minikube.sigs.k8s.io=newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:50:34.168619 3204903 oci.go:103] Successfully created a docker volume newest-cni-669680
	I1217 11:50:34.168718 3204903 cli_runner.go:164] Run: docker run --rm --name newest-cni-669680-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --entrypoint /usr/bin/test -v newest-cni-669680:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:50:34.733678 3204903 oci.go:107] Successfully prepared a docker volume newest-cni-669680
	I1217 11:50:34.733775 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:34.733794 3204903 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:50:34.733863 3204903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:50:38.835811 3204903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.101893526s)
	I1217 11:50:38.835847 3204903 kic.go:203] duration metric: took 4.10204956s to extract preloaded images to volume ...
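	# A sketch, not part of the captured run: the extraction above untars the cached
	# preload tarball straight into the named volume that becomes the node's /var,
	# so containerd starts with the Kubernetes images already on disk. Assuming the
	# kicbase image ships /bin/ls, the volume can be inspected with a throwaway
	# container (the --entrypoint override mirrors the sidecar invocation above):
	docker run --rm --entrypoint /bin/ls -v newest-cni-669680:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 /var/lib/containerd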
	W1217 11:50:38.835990 3204903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 11:50:38.836106 3204903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:50:38.889030 3204903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-669680 --name newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-669680 --network newest-cni-669680 --ip 192.168.76.2 --volume newest-cni-669680:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
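	# A sketch, not part of the captured run: the `docker run` above is the entire
	# "node": a privileged container booting systemd (/sbin/init) on the bridge
	# network and static IP chosen earlier, with each service port published to an
	# ephemeral loopback-only host port via --publish=127.0.0.1::<port>. The
	# resulting host-side bindings can be listed with:
	docker port newest-cni-669680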
	I1217 11:50:39.198138 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Running}}
	I1217 11:50:39.220630 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.245099 3204903 cli_runner.go:164] Run: docker exec newest-cni-669680 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:50:39.296762 3204903 oci.go:144] the created container "newest-cni-669680" has a running status.
	I1217 11:50:39.296821 3204903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519...
	I1217 11:50:39.301246 3204903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 11:50:39.326812 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.353133 3204903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:50:39.353152 3204903 kic_runner.go:114] Args: [docker exec --privileged newest-cni-669680 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:50:39.407725 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.427651 3204903 machine.go:94] provisionDockerMachine start ...
	I1217 11:50:39.427814 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:39.449037 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:39.449153 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:39.449161 3204903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:50:39.449689 3204903 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46730->127.0.0.1:36043: read: connection reset by peer
	I1217 11:50:42.588075 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
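	# A sketch, not part of the captured run: the "connection reset by peer" above is
	# the expected first dial against a container whose sshd has not finished
	# starting; libmachine retries until the `hostname` probe succeeds three seconds
	# later. A manual equivalent, using the key and loopback port from this run:
	ssh -i /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 -p 36043 docker@127.0.0.1 hostname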
	
	I1217 11:50:42.588101 3204903 ubuntu.go:182] provisioning hostname "newest-cni-669680"
	I1217 11:50:42.588181 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.610896 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.611004 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.611019 3204903 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 11:50:42.758221 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 11:50:42.758323 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.776901 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.777030 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.777054 3204903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:50:42.909042 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:50:42.909069 3204903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:50:42.909089 3204903 ubuntu.go:190] setting up certificates
	I1217 11:50:42.909098 3204903 provision.go:84] configureAuth start
	I1217 11:50:42.909162 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:42.927260 3204903 provision.go:143] copyHostCerts
	I1217 11:50:42.927326 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:50:42.927335 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:50:42.927414 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:50:42.927515 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:50:42.927521 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:50:42.927546 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:50:42.927611 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:50:42.927615 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:50:42.927639 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:50:42.927694 3204903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
	I1217 11:50:43.131974 3204903 provision.go:177] copyRemoteCerts
	I1217 11:50:43.132056 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:50:43.132097 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.150139 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.244232 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:50:43.264226 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:50:43.288821 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:50:43.309853 3204903 provision.go:87] duration metric: took 400.734271ms to configureAuth
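	# A sketch, not part of the captured run: configureAuth generated a server
	# certificate whose SANs cover every name the endpoint can be reached by
	# (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-669680) and copied it
	# to /etc/docker inside the node. The SAN list can be double-checked with:
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'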
	I1217 11:50:43.309977 3204903 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:50:43.310242 3204903 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:43.310275 3204903 machine.go:97] duration metric: took 3.882542746s to provisionDockerMachine
	I1217 11:50:43.310297 3204903 client.go:176] duration metric: took 9.291651647s to LocalClient.Create
	I1217 11:50:43.310348 3204903 start.go:167] duration metric: took 9.291744428s to libmachine.API.Create "newest-cni-669680"
	I1217 11:50:43.310376 3204903 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 11:50:43.310413 3204903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:50:43.310526 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:50:43.310608 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.332854 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.428662 3204903 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:50:43.432294 3204903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:50:43.432325 3204903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:50:43.432337 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:50:43.432397 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:50:43.432547 3204903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:50:43.432651 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:50:43.440004 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:43.457337 3204903 start.go:296] duration metric: took 146.930324ms for postStartSetup
	I1217 11:50:43.457705 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.474473 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:43.474760 3204903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:50:43.474809 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.491508 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.585952 3204903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:50:43.591149 3204903 start.go:128] duration metric: took 9.576343313s to createHost
	I1217 11:50:43.591175 3204903 start.go:83] releasing machines lock for "newest-cni-669680", held for 9.576490895s
	I1217 11:50:43.591260 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.609275 3204903 ssh_runner.go:195] Run: cat /version.json
	I1217 11:50:43.609319 3204903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:50:43.609330 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.609377 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.631237 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.636805 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.724501 3204903 ssh_runner.go:195] Run: systemctl --version
	I1217 11:50:43.819531 3204903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:50:43.823938 3204903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:50:43.824018 3204903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:50:43.852205 3204903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 11:50:43.852281 3204903 start.go:496] detecting cgroup driver to use...
	I1217 11:50:43.852330 3204903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:50:43.852407 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:50:43.867801 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:50:43.881415 3204903 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:50:43.881505 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:50:43.898869 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:50:43.917331 3204903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:50:44.042660 3204903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:50:44.176397 3204903 docker.go:234] disabling docker service ...
	I1217 11:50:44.176490 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:50:44.197465 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:50:44.211041 3204903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:50:44.324043 3204903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:50:44.437310 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:50:44.451253 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:50:44.468227 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:50:44.477660 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:50:44.487940 3204903 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:50:44.488046 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:50:44.497581 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.506638 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:50:44.516061 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.524921 3204903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:50:44.533457 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:50:44.542606 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:50:44.551989 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:50:44.561578 3204903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:50:44.570051 3204903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:50:44.577822 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:44.688867 3204903 ssh_runner.go:195] Run: sudo systemctl restart containerd
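	# A sketch, not part of the captured run: the sed edits above rewrite
	# /etc/containerd/config.toml in place to match the host: cgroupfs instead of
	# systemd cgroups (SystemdCgroup = false), the runc v2 shim, the
	# registry.k8s.io/pause:3.10.1 sandbox image, and /etc/cni/net.d as the CNI conf
	# dir; the daemon-reload and restart then pick the file up. Inside the node, the
	# two settings that most often cause mismatches can be confirmed with:
	grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml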
	I1217 11:50:44.840667 3204903 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:50:44.840788 3204903 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:50:44.845363 3204903 start.go:564] Will wait 60s for crictl version
	I1217 11:50:44.845485 3204903 ssh_runner.go:195] Run: which crictl
	I1217 11:50:44.849376 3204903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:50:44.883387 3204903 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
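crictl resolves its endpoint from the /etc/crictl.yaml written at 11:50:44, so the version probe above is equivalent to naming the socket explicitly; as a sketch:

	# bypass /etc/crictl.yaml and point at the containerd socket directly
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version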
	I1217 11:50:44.883511 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.905807 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.930438 3204903 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:50:44.933446 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:44.950378 3204903 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:50:44.954519 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:50:44.968645 3204903 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 11:50:44.971593 3204903 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:50:44.971744 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:44.971843 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.011583 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.011607 3204903 containerd.go:534] Images already preloaded, skipping extraction
	I1217 11:50:45.011729 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.074368 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.074393 3204903 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:50:45.074401 3204903 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:50:45.074511 3204903 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
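In the unit fragment above, the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before re-declaring it from a drop-in; the fragment is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below. To see the merged unit systemd will actually run:

	# print the base kubelet unit plus every drop-in, in resolution order
	sudo systemctl cat kubelet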
	I1217 11:50:45.074583 3204903 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:50:45.124774 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:45.124803 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:45.124823 3204903 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 11:50:45.124848 3204903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:50:45.125086 3204903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
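	A config like the one rendered above can be exercised without mutating node state via kubeadm's dry-run mode; a sketch, assuming the staged kubeadm binary under /var/lib/minikube/binaries/v1.35.0-rc.1 is used:

	# validate the config and render manifests to a temp dir instead of /etc/kubernetes
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run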
	
	I1217 11:50:45.125178 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:50:45.136963 3204903 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:50:45.137076 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:50:45.146865 3204903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 11:50:45.164693 3204903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:50:45.183943 3204903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1217 11:50:45.201780 3204903 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:50:45.209411 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:50:45.227993 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:45.376186 3204903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:50:45.396221 3204903 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 11:50:45.396257 3204903 certs.go:195] generating shared ca certs ...
	I1217 11:50:45.396275 3204903 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.396432 3204903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:50:45.396497 3204903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:50:45.396511 3204903 certs.go:257] generating profile certs ...
	I1217 11:50:45.396576 3204903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 11:50:45.396594 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt with IP's: []
	I1217 11:50:45.498992 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt ...
	I1217 11:50:45.499023 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt: {Name:mkfb66bec095c72b7c1a0e563529baf2180c300c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499228 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key ...
	I1217 11:50:45.499243 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key: {Name:mk7292acf4e53dd5012d44cc923a43c80ae9a7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499340 3204903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 11:50:45.499360 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 11:50:45.885492 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 ...
	I1217 11:50:45.885525 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161: {Name:mkc2aab84e543777fe00770e300fac9f47cd579f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885732 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 ...
	I1217 11:50:45.885749 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161: {Name:mk25ae271c13c745dd8ef046c320963d505be1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885837 3204903 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt
	I1217 11:50:45.885921 3204903 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key
	I1217 11:50:45.885986 3204903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 11:50:45.886007 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt with IP's: []
	I1217 11:50:46.187502 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt ...
	I1217 11:50:46.187541 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt: {Name:mk12f9e3a4ac82afa8ef3e938731ab0419f581a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:46.187741 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key ...
	I1217 11:50:46.187756 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key: {Name:mk258e31e31368b8ae182e758b28fd15f98dabb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:46.187958 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:50:46.188008 3204903 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:50:46.188031 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:50:46.188065 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:50:46.188095 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:50:46.188125 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:50:46.188174 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:46.188855 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:50:46.209179 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:50:46.229625 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:50:46.248348 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:50:46.280053 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:50:46.299540 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 11:50:46.331420 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:50:46.354812 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:50:46.379741 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:50:46.398349 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:50:46.416502 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:50:46.434656 3204903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:50:46.448045 3204903 ssh_runner.go:195] Run: openssl version
	I1217 11:50:46.454404 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.462383 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:50:46.470220 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474117 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474205 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.515776 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:50:46.523521 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:50:46.531167 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.538808 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:50:46.546526 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550351 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550420 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.591537 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:50:46.599582 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
	I1217 11:50:46.607230 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.615038 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:50:46.623144 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627219 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627293 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.668548 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:50:46.676480 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
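The openssl x509 -hash calls compute the subject-name hash that OpenSSL uses to look CAs up in /etc/ssl/certs, where each certificate must be reachable as <hash>.0; that is why each cert install above ends in an ln -fs to a hash-named link. The generic pattern, with cert.pem as a placeholder:

	# expose a CA to OpenSSL's hashed trust-dir lookup
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
	sudo ln -fs /usr/share/ca-certificates/cert.pem "/etc/ssl/certs/$h.0"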
	I1217 11:50:46.684254 3204903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:50:46.687989 3204903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:50:46.688053 3204903 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:46.688186 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:50:46.688251 3204903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:50:46.715504 3204903 cri.go:89] found id: ""
	I1217 11:50:46.715577 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:50:46.723636 3204903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:50:46.731913 3204903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:50:46.732013 3204903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:50:46.740391 3204903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:50:46.740486 3204903 kubeadm.go:158] found existing configuration files:
	
	I1217 11:50:46.740554 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:50:46.748658 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:50:46.748734 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:50:46.756251 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:50:46.764744 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:50:46.764812 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:50:46.772495 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.780304 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:50:46.780374 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.787858 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:50:46.795827 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:50:46.795920 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:50:46.803940 3204903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:50:46.842300 3204903 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:50:46.842364 3204903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:50:46.914982 3204903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:50:46.915066 3204903 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:50:46.915120 3204903 kubeadm.go:319] OS: Linux
	I1217 11:50:46.915224 3204903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:50:46.915306 3204903 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:50:46.915380 3204903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:50:46.915458 3204903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:50:46.915534 3204903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:50:46.915612 3204903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:50:46.915688 3204903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:50:46.915760 3204903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:50:46.915833 3204903 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:50:46.991927 3204903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:50:46.992117 3204903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:50:46.992264 3204903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:50:47.011559 3204903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:50:47.018032 3204903 out.go:252]   - Generating certificates and keys ...
	I1217 11:50:47.018195 3204903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:50:47.018301 3204903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:50:47.129470 3204903 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:50:47.445618 3204903 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:50:47.915158 3204903 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:50:48.499656 3204903 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:50:48.596834 3204903 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:50:48.597124 3204903 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.753661 3204903 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:50:48.754010 3204903 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.982189 3204903 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:50:49.176711 3204903 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:50:49.329925 3204903 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:50:49.330545 3204903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:50:49.669219 3204903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:50:49.769896 3204903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:50:50.134620 3204903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:50:50.518232 3204903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:50:51.159536 3204903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:50:51.160438 3204903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:50:51.163380 3204903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:50:51.167113 3204903 out.go:252]   - Booting up control plane ...
	I1217 11:50:51.167275 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:50:51.167359 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:50:51.168888 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:50:51.187617 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:50:51.187958 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:50:51.195573 3204903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:50:51.195900 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:50:51.195946 3204903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:50:51.332866 3204903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:50:51.332987 3204903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:53:54.678520 3184285 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061049s
	I1217 11:53:54.678562 3184285 kubeadm.go:319] 
	I1217 11:53:54.678668 3184285 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:53:54.678735 3184285 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:53:54.679061 3184285 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:53:54.679078 3184285 kubeadm.go:319] 
	I1217 11:53:54.679259 3184285 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:53:54.679319 3184285 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:53:54.679610 3184285 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:53:54.679619 3184285 kubeadm.go:319] 
	I1217 11:53:54.684331 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:53:54.684946 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:53:54.685447 3184285 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:53:54.685728 3184285 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 11:53:54.685741 3184285 kubeadm.go:319] 
	I1217 11:53:54.685819 3184285 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
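The two commands kubeadm suggests are the fastest triage on the node, together with a direct probe of the healthz endpoint the four-minute check was polling:

	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet | tail -n 50
	curl -sS http://127.0.0.1:10248/healthz; echo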
	I1217 11:53:54.685877 3184285 kubeadm.go:403] duration metric: took 8m8.248541569s to StartCluster
	I1217 11:53:54.685915 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:54.685995 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:54.712731 3184285 cri.go:89] found id: ""
	I1217 11:53:54.712767 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.712778 3184285 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:53:54.712784 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:53:54.712847 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:54.738074 3184285 cri.go:89] found id: ""
	I1217 11:53:54.738101 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.738110 3184285 logs.go:284] No container was found matching "etcd"
	I1217 11:53:54.738116 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:53:54.738176 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:54.763115 3184285 cri.go:89] found id: ""
	I1217 11:53:54.763142 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.763151 3184285 logs.go:284] No container was found matching "coredns"
	I1217 11:53:54.763160 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:54.763223 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:54.788613 3184285 cri.go:89] found id: ""
	I1217 11:53:54.788637 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.788646 3184285 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:53:54.788652 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:54.788710 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:54.814171 3184285 cri.go:89] found id: ""
	I1217 11:53:54.814207 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.814216 3184285 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:54.814222 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:54.814287 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:54.839339 3184285 cri.go:89] found id: ""
	I1217 11:53:54.839362 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.839370 3184285 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:53:54.839376 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:54.839434 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:54.866460 3184285 cri.go:89] found id: ""
	I1217 11:53:54.866486 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.866495 3184285 logs.go:284] No container was found matching "kindnet"
	I1217 11:53:54.866505 3184285 logs.go:123] Gathering logs for container status ...
	I1217 11:53:54.866516 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:54.897933 3184285 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:54.897961 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:54.955505 3184285 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:54.955540 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:54.972937 3184285 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:54.972967 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:55.055017 3184285 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:55.055059 3184285 logs.go:123] Gathering logs for containerd ...
	I1217 11:53:55.055072 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 11:53:55.106009 3184285 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 11:53:55.106084 3184285 out.go:285] * 
	W1217 11:53:55.106176 3184285 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:53:55.106224 3184285 out.go:285] * 
	W1217 11:53:55.108372 3184285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:53:55.114186 3184285 out.go:203] 
	W1217 11:53:55.117966 3184285 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:53:55.118025 3184285 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:53:55.118053 3184285 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 11:53:55.121856 3184285 out.go:203] 
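The wait-control-plane failure above bottoms out in the kubelet section further down: kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host, so the healthz endpoint on 127.0.0.1:10248 never comes up. As a rough sketch of the steps this output itself suggests (profile name no-preload-118262 taken from this run; the cgroup-driver flag is minikube's generic suggestion, not a verified fix for the cgroup v1 rejection):

	# Inspect the kubelet restart loop from inside the node
	minikube -p no-preload-118262 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 30

	# Retry the start with the kubelet cgroup driver minikube suggests
	minikube start -p no-preload-118262 --extra-config=kubelet.cgroup-driver=systemd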
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:45:33 no-preload-118262 containerd[757]: time="2025-12-17T11:45:33.505204167Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.939956582Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.942293323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.949096337Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.949777101Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.535897084Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.538753041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.553736301Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.554961027Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.053706291Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.056518475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.074021896Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.075564022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.899212727Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.902323078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.911666939Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.912612705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.544997473Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.548274171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.559283871Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.561918164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.580017138Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.582800563Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.590535248Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.590987315Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:53:59.187126    5787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:59.187910    5787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:59.189537    5787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:59.189867    5787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:59.191352    5787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:53:59 up 17:36,  0 user,  load average: 0.81, 1.21, 1.78
	Linux no-preload-118262 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 11:53:56 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:56 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:56 no-preload-118262 kubelet[5554]: E1217 11:53:56.817621    5554 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:56 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:57 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 11:53:57 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:57 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:57 no-preload-118262 kubelet[5610]: E1217 11:53:57.563657    5610 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:57 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:57 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:58 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 17 11:53:58 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:58 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:58 no-preload-118262 kubelet[5685]: E1217 11:53:58.316719    5685 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:58 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:58 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:53:58 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 17 11:53:58 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:59 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:53:59 no-preload-118262 kubelet[5758]: E1217 11:53:59.066145    5758 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:53:59 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:53:59 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
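Since the kubelet error above hinges on the host running cgroup v1, a quick way to confirm the cgroup mode is to check the filesystem type mounted at /sys/fs/cgroup (a standard check: tmpfs indicates cgroup v1, cgroup2fs indicates v2). A sketch, run against the node from this test:

	# cgroup2fs => cgroup v2; tmpfs => cgroup v1 (the mode kubelet v1.35 rejects by default)
	minikube -p no-preload-118262 ssh -- stat -fc %T /sys/fs/cgroup/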
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262: exit status 6 (343.800849ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 11:53:59.633680 3210446 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-118262" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (2.97s)
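The Stopped apiserver status plus the stale-context warning are consistent with the endpoint error in stderr: the no-preload-118262 profile is missing from the test's kubeconfig. The status output itself names the fix; a minimal sketch, assuming the profile were in a recoverable state:

	# Repoint the kubectl context at the cluster's current endpoint
	minikube update-context -p no-preload-118262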

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (112.9s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 11:54:03.818915 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:03.825443 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:03.836951 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:03.858467 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:03.899925 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:03.981401 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:04.142988 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:04.464798 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:05.106883 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:06.389077 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:08.951108 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:14.073057 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:18.217461 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:24.314384 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:28.205581 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:54:44.796282 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:55:25.758658 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:55:43.084765 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m51.342943073s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
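Note that the --validate=false escape hatch mentioned in the stderr would not rescue this apply: validation fails while downloading the OpenAPI schema from localhost:8443, and the apply itself must reach that same unreachable apiserver. A quick probe of the endpoint from inside the node (a sketch, not part of the test):

	# Expect 'connection refused' while the control plane is down
	minikube -p no-preload-118262 ssh -- curl -sk https://localhost:8443/healthz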
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-118262 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-118262 describe deploy/metrics-server -n kube-system: exit status 1 (56.542728ms)

** stderr ** 
	error: context "no-preload-118262" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-118262 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
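For reference, the image assertion in this step could be checked directly with a jsonpath query, had the context existed (a sketch; deployment and namespace names taken from the test above):

	kubectl --context no-preload-118262 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'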
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-118262
helpers_test.go:244: (dbg) docker inspect no-preload-118262:

-- stdout --
	[
	    {
	        "Id": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	        "Created": "2025-12-17T11:45:23.889791979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3184585,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:45:23.975335333Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hostname",
	        "HostsPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hosts",
	        "LogPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362-json.log",
	        "Name": "/no-preload-118262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-118262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-118262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	                "LowerDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-118262",
	                "Source": "/var/lib/docker/volumes/no-preload-118262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-118262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-118262",
	                "name.minikube.sigs.k8s.io": "no-preload-118262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edfd8c84516eb23c0ad2b26b7726367c3e837ddca981000c80312ea31fd9a26a",
	            "SandboxKey": "/var/run/docker/netns/edfd8c84516e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-118262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:56:4e:97:d8:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3227851744df2bdac9c367dc789ddfe2892f877b7b9b947cdcd81cb2897c4ba1",
	                    "EndpointID": "b3f5ff720ab2b961fc2a2904ac219198576784cb510a37f2350f10bf17783082",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-118262",
	                        "4578079103f7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262: exit status 6 (307.445434ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 11:55:51.365628 3212473 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-118262 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-182607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                          │ cert-expiration-182607       │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:45 UTC │
	│ delete  │ -p cert-expiration-182607                                                                                                                                                                                                                                │ cert-expiration-182607       │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:45 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:45 UTC │ 17 Dec 25 11:46 UTC │
	│ addons  │ enable metrics-server -p embed-certs-628462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:46 UTC │
	│ stop    │ -p embed-certs-628462 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:47 UTC │
	│ addons  │ enable dashboard -p embed-certs-628462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ image   │ embed-certs-628462 image list --format=json                                                                                                                                                                                                              │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ pause   │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ unpause │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:50:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:50:33.770675 3204903 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:50:33.770892 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.770930 3204903 out.go:374] Setting ErrFile to fd 2...
	I1217 11:50:33.770950 3204903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:50:33.771242 3204903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:50:33.771720 3204903 out.go:368] Setting JSON to false
	I1217 11:50:33.772826 3204903 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63184,"bootTime":1765909050,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:50:33.772934 3204903 start.go:143] virtualization:  
	I1217 11:50:33.777422 3204903 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:50:33.781147 3204903 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:50:33.781258 3204903 notify.go:221] Checking for updates...
	I1217 11:50:33.787770 3204903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:50:33.790969 3204903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:50:33.794108 3204903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:50:33.797396 3204903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:50:33.800914 3204903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:50:33.804694 3204903 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:33.804819 3204903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:50:33.836693 3204903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:50:33.836824 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.905198 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.886446399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.905310 3204903 docker.go:319] overlay module found
	I1217 11:50:33.908522 3204903 out.go:179] * Using the docker driver based on user configuration
	I1217 11:50:33.911483 3204903 start.go:309] selected driver: docker
	I1217 11:50:33.911512 3204903 start.go:927] validating driver "docker" against <nil>
	I1217 11:50:33.911528 3204903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:50:33.912303 3204903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:50:33.968344 3204903 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:50:33.958386366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:50:33.968600 3204903 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 11:50:33.968643 3204903 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 11:50:33.968883 3204903 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 11:50:33.971848 3204903 out.go:179] * Using Docker driver with root privileges
	I1217 11:50:33.974707 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:33.974785 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:33.974803 3204903 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 11:50:33.974912 3204903 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:33.980004 3204903 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 11:50:33.982917 3204903 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:50:33.985952 3204903 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:50:33.988892 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:33.988945 3204903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 11:50:33.988990 3204903 cache.go:65] Caching tarball of preloaded images
	I1217 11:50:33.989015 3204903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:50:33.989111 3204903 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 11:50:33.989123 3204903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 11:50:33.989239 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:33.989268 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json: {Name:mk0a64d844d14a82596feb52de4f9f10fa21ee9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:34.014470 3204903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:50:34.014499 3204903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:50:34.014516 3204903 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:50:34.014550 3204903 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:50:34.014670 3204903 start.go:364] duration metric: took 97.672µs to acquireMachinesLock for "newest-cni-669680"
	I1217 11:50:34.014703 3204903 start.go:93] Provisioning new machine with config: &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:50:34.014791 3204903 start.go:125] createHost starting for "" (driver="docker")
	I1217 11:50:34.018329 3204903 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 11:50:34.018591 3204903 start.go:159] libmachine.API.Create for "newest-cni-669680" (driver="docker")
	I1217 11:50:34.018632 3204903 client.go:173] LocalClient.Create starting
	I1217 11:50:34.018712 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 11:50:34.018752 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018777 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.018837 3204903 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 11:50:34.018864 3204903 main.go:143] libmachine: Decoding PEM data...
	I1217 11:50:34.018877 3204903 main.go:143] libmachine: Parsing certificate...
	I1217 11:50:34.019266 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 11:50:34.036819 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 11:50:34.036904 3204903 network_create.go:284] running [docker network inspect newest-cni-669680] to gather additional debugging logs...
	I1217 11:50:34.036925 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680
	W1217 11:50:34.053220 3204903 cli_runner.go:211] docker network inspect newest-cni-669680 returned with exit code 1
	I1217 11:50:34.053254 3204903 network_create.go:287] error running [docker network inspect newest-cni-669680]: docker network inspect newest-cni-669680: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-669680 not found
	I1217 11:50:34.053268 3204903 network_create.go:289] output of [docker network inspect newest-cni-669680]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-669680 not found
	
	** /stderr **
	I1217 11:50:34.053385 3204903 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:34.073660 3204903 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
	I1217 11:50:34.074039 3204903 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e0545776686c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:70:9e:49:ed:7d} reservation:<nil>}
	I1217 11:50:34.074407 3204903 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-279becfad84b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:b7:62:6e:a9:ee} reservation:<nil>}
	I1217 11:50:34.074906 3204903 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a004b0}
	I1217 11:50:34.074944 3204903 network_create.go:124] attempt to create docker network newest-cni-669680 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 11:50:34.075027 3204903 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-669680 newest-cni-669680
	I1217 11:50:34.133594 3204903 network_create.go:108] docker network newest-cni-669680 192.168.76.0/24 created
	I1217 11:50:34.133624 3204903 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-669680" container
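
The subnet selection above is a linear scan: network.go walks a fixed list of private /24 candidates (192.168.49.0, 192.168.58.0, 192.168.67.0, ...), skips any range already backed by an existing bridge interface, and settles on the first free one; the node is then given the .2 host because .1 is reserved for the gateway. A minimal sketch of the same scan in Go, assuming the docker CLI is on PATH (the candidate list and the freeSubnet helper are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // freeSubnet returns the first candidate /24 that no existing docker
    // network already claims, mirroring the "skipping subnet ... taken" scan.
    func freeSubnet(candidates []string) (string, error) {
    	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
    	if err != nil {
    		return "", err
    	}
    	taken := map[string]bool{}
    	for _, name := range strings.Fields(string(out)) {
    		cidr, err := exec.Command("docker", "network", "inspect", name,
    			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
    		if err == nil {
    			taken[strings.TrimSpace(string(cidr))] = true
    		}
    	}
    	for _, c := range candidates {
    		if !taken[c] {
    			return c, nil
    		}
    	}
    	return "", fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
    	subnet, err := freeSubnet([]string{
    		"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24",
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("using", subnet) // the node would get the .2 address in this range
    }

In this run the first three candidates were taken by other test profiles, which is why the cluster lands on 192.168.76.0/24.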
	I1217 11:50:34.133717 3204903 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 11:50:34.150255 3204903 cli_runner.go:164] Run: docker volume create newest-cni-669680 --label name.minikube.sigs.k8s.io=newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true
	I1217 11:50:34.168619 3204903 oci.go:103] Successfully created a docker volume newest-cni-669680
	I1217 11:50:34.168718 3204903 cli_runner.go:164] Run: docker run --rm --name newest-cni-669680-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --entrypoint /usr/bin/test -v newest-cni-669680:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 11:50:34.733678 3204903 oci.go:107] Successfully prepared a docker volume newest-cni-669680
	I1217 11:50:34.733775 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:34.733794 3204903 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 11:50:34.733863 3204903 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 11:50:38.835811 3204903 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-669680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.101893526s)
	I1217 11:50:38.835847 3204903 kic.go:203] duration metric: took 4.10204956s to extract preloaded images to volume ...
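
The extraction step is a one-off container: the lz4 preload tarball is bind-mounted read-only, the cluster's named volume is mounted at /extractDir, and the kicbase image's own tar unpacks one into the other (about 4.1s here). A sketch of replaying that command from Go; the tarball path is a placeholder, while the volume and image names are the ones from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Placeholder tarball path; volume and image as in the log above.
    	tarball := "/path/to/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4"
    	volume := "newest-cni-669680"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"

    	start := time.Now()
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
    	}
    	fmt.Println("extracted preload in", time.Since(start))
    }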
	W1217 11:50:38.835990 3204903 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 11:50:38.836106 3204903 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 11:50:38.889030 3204903 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-669680 --name newest-cni-669680 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-669680 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-669680 --network newest-cni-669680 --ip 192.168.76.2 --volume newest-cni-669680:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 11:50:39.198138 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Running}}
	I1217 11:50:39.220630 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.245099 3204903 cli_runner.go:164] Run: docker exec newest-cni-669680 stat /var/lib/dpkg/alternatives/iptables
	I1217 11:50:39.296762 3204903 oci.go:144] the created container "newest-cni-669680" has a running status.
	I1217 11:50:39.296821 3204903 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519...
	I1217 11:50:39.301246 3204903 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 11:50:39.326812 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.353133 3204903 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 11:50:39.353152 3204903 kic_runner.go:114] Args: [docker exec --privileged newest-cni-669680 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 11:50:39.407725 3204903 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 11:50:39.427651 3204903 machine.go:94] provisionDockerMachine start ...
	I1217 11:50:39.427814 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:39.449037 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:39.449153 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:39.449161 3204903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:50:39.449689 3204903 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46730->127.0.0.1:36043: read: connection reset by peer
	I1217 11:50:42.588075 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 11:50:42.588101 3204903 ubuntu.go:182] provisioning hostname "newest-cni-669680"
	I1217 11:50:42.588181 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.610896 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.611004 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.611019 3204903 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 11:50:42.758221 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 11:50:42.758323 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:42.776901 3204903 main.go:143] libmachine: Using SSH client type: native
	I1217 11:50:42.777030 3204903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36043 <nil> <nil>}
	I1217 11:50:42.777054 3204903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:50:42.909042 3204903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
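
Every SSH command in this phase is tunnelled to 127.0.0.1 on whatever ephemeral port docker published for the container's 22/tcp (36043 in this run); the port is recovered each time with the container-inspect template visible in the Run lines above. A minimal Go sketch of that lookup (the hostSSHPort helper is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostSSHPort resolves the host-side port docker mapped to the
    // container's 22/tcp, i.e. the same template cli_runner executes above.
    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("newest-cni-669680")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh -p", port, "docker@127.0.0.1")
    }

The early "connection reset by peer" at 11:50:39 is the dial racing sshd's startup inside the container; the connection is simply retried until the hostname command succeeds at 11:50:42.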
	I1217 11:50:42.909069 3204903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:50:42.909089 3204903 ubuntu.go:190] setting up certificates
	I1217 11:50:42.909098 3204903 provision.go:84] configureAuth start
	I1217 11:50:42.909162 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:42.927260 3204903 provision.go:143] copyHostCerts
	I1217 11:50:42.927326 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:50:42.927335 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:50:42.927414 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:50:42.927515 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:50:42.927521 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:50:42.927546 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:50:42.927611 3204903 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:50:42.927615 3204903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:50:42.927639 3204903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:50:42.927694 3204903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
	I1217 11:50:43.131974 3204903 provision.go:177] copyRemoteCerts
	I1217 11:50:43.132056 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:50:43.132097 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.150139 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.244232 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:50:43.264226 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:50:43.288821 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:50:43.309853 3204903 provision.go:87] duration metric: took 400.734271ms to configureAuth
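
configureAuth generates a server certificate whose SAN set covers the loopback address, the freshly assigned node IP, and the usual host names (see the san=[...] list above). A compact sketch of issuing such a certificate with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair:

    package main

    import (
    	"crypto/ed25519"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	pub, priv, err := ed25519.GenerateKey(rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-669680"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
    		// SAN entries as in the log: IP addresses plus DNS names.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-669680"},
    		KeyUsage:    x509.KeyUsageDigitalSignature,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, pub, priv)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }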
	I1217 11:50:43.309977 3204903 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:50:43.310242 3204903 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:50:43.310275 3204903 machine.go:97] duration metric: took 3.882542746s to provisionDockerMachine
	I1217 11:50:43.310297 3204903 client.go:176] duration metric: took 9.291651647s to LocalClient.Create
	I1217 11:50:43.310348 3204903 start.go:167] duration metric: took 9.291744428s to libmachine.API.Create "newest-cni-669680"
	I1217 11:50:43.310376 3204903 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 11:50:43.310413 3204903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:50:43.310526 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:50:43.310608 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.332854 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.428662 3204903 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:50:43.432294 3204903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:50:43.432325 3204903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:50:43.432337 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:50:43.432397 3204903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:50:43.432547 3204903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:50:43.432651 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:50:43.440004 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:43.457337 3204903 start.go:296] duration metric: took 146.930324ms for postStartSetup
	I1217 11:50:43.457705 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.474473 3204903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 11:50:43.474760 3204903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:50:43.474809 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.491508 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.585952 3204903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:50:43.591149 3204903 start.go:128] duration metric: took 9.576343313s to createHost
	I1217 11:50:43.591175 3204903 start.go:83] releasing machines lock for "newest-cni-669680", held for 9.576490895s
	I1217 11:50:43.591260 3204903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 11:50:43.609275 3204903 ssh_runner.go:195] Run: cat /version.json
	I1217 11:50:43.609319 3204903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:50:43.609330 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.609377 3204903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 11:50:43.631237 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.636805 3204903 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36043 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 11:50:43.724501 3204903 ssh_runner.go:195] Run: systemctl --version
	I1217 11:50:43.819531 3204903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:50:43.823938 3204903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:50:43.824018 3204903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:50:43.852205 3204903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
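
Before installing its own CNI, minikube parks any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, exactly as the find/-exec mv line above shows, so that only the CNI it installs (kindnet here) stays active. The same idea as a small Go sketch (directory from the log; the matching logic is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	files, err := filepath.Glob("/etc/cni/net.d/*")
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range files {
    		base := filepath.Base(p)
    		if strings.HasSuffix(base, ".mk_disabled") {
    			continue // already parked
    		}
    		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
    			if err := os.Rename(p, p+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", p)
    		}
    	}
    }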
	I1217 11:50:43.852281 3204903 start.go:496] detecting cgroup driver to use...
	I1217 11:50:43.852330 3204903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:50:43.852407 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:50:43.867801 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:50:43.881415 3204903 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:50:43.881505 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:50:43.898869 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:50:43.917331 3204903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:50:44.042660 3204903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:50:44.176397 3204903 docker.go:234] disabling docker service ...
	I1217 11:50:44.176490 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:50:44.197465 3204903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:50:44.211041 3204903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:50:44.324043 3204903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:50:44.437310 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:50:44.451253 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:50:44.468227 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:50:44.477660 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:50:44.487940 3204903 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:50:44.488046 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:50:44.497581 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.506638 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:50:44.516061 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:50:44.524921 3204903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:50:44.533457 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:50:44.542606 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:50:44.551989 3204903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:50:44.561578 3204903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:50:44.570051 3204903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:50:44.577822 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:44.688867 3204903 ssh_runner.go:195] Run: sudo systemctl restart containerd
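
The run of sed edits between 11:50:44.46 and 11:50:44.55 rewrites /etc/containerd/config.toml in place: pin the pause image, turn off restrict_oom_score_adj and SystemdCgroup (this host uses cgroupfs), migrate any v1/runc.v1 runtime names to io.containerd.runc.v2, reset conf_dir, and enable unprivileged ports; only then is containerd restarted. One of those edits as a Go sketch instead of sed (same file, rewrite-in-Go approach is illustrative):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }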
	I1217 11:50:44.840667 3204903 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:50:44.840788 3204903 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:50:44.845363 3204903 start.go:564] Will wait 60s for crictl version
	I1217 11:50:44.845485 3204903 ssh_runner.go:195] Run: which crictl
	I1217 11:50:44.849376 3204903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:50:44.883387 3204903 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 11:50:44.883511 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.905807 3204903 ssh_runner.go:195] Run: containerd --version
	I1217 11:50:44.930438 3204903 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:50:44.933446 3204903 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:50:44.950378 3204903 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 11:50:44.954519 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:50:44.968645 3204903 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 11:50:44.971593 3204903 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:50:44.971744 3204903 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:50:44.971843 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.011583 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.011607 3204903 containerd.go:534] Images already preloaded, skipping extraction
	I1217 11:50:45.011729 3204903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:50:45.074368 3204903 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:50:45.074393 3204903 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:50:45.074401 3204903 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:50:45.074511 3204903 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:50:45.074583 3204903 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:50:45.124774 3204903 cni.go:84] Creating CNI manager for ""
	I1217 11:50:45.124803 3204903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:50:45.124823 3204903 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 11:50:45.124848 3204903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:50:45.125086 3204903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:50:45.125178 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:50:45.136963 3204903 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:50:45.137076 3204903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:50:45.146865 3204903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 11:50:45.164693 3204903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:50:45.183943 3204903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
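
The 2233-byte kubeadm.yaml just staged to /var/tmp/minikube/kubeadm.yaml.new is the four-document config printed above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity check that splits it back into its documents, assuming gopkg.in/yaml.v3 is available in go.mod (a sketch, not how minikube validates it):

    package main

    import (
    	"fmt"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			break // io.EOF once all four documents are read
    		}
    		fmt.Println(doc.Kind, doc.APIVersion)
    	}
    }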
	I1217 11:50:45.201780 3204903 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:50:45.209411 3204903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:50:45.227993 3204903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:50:45.376186 3204903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:50:45.396221 3204903 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 11:50:45.396257 3204903 certs.go:195] generating shared ca certs ...
	I1217 11:50:45.396275 3204903 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.396432 3204903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:50:45.396497 3204903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:50:45.396511 3204903 certs.go:257] generating profile certs ...
	I1217 11:50:45.396576 3204903 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 11:50:45.396594 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt with IP's: []
	I1217 11:50:45.498992 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt ...
	I1217 11:50:45.499023 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.crt: {Name:mkfb66bec095c72b7c1a0e563529baf2180c300c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499228 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key ...
	I1217 11:50:45.499243 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key: {Name:mk7292acf4e53dd5012d44cc923a43c80ae9a7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.499340 3204903 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 11:50:45.499360 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 11:50:45.885492 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 ...
	I1217 11:50:45.885525 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161: {Name:mkc2aab84e543777fe00770e300fac9f47cd579f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885732 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 ...
	I1217 11:50:45.885749 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161: {Name:mk25ae271c13c745dd8ef046c320963d505be1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:45.885837 3204903 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt
	I1217 11:50:45.885921 3204903 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key
	I1217 11:50:45.885986 3204903 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 11:50:45.886007 3204903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt with IP's: []
	I1217 11:50:46.187502 3204903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt ...
	I1217 11:50:46.187541 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt: {Name:mk12f9e3a4ac82afa8ef3e938731ab0419f581a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:46.187741 3204903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key ...
	I1217 11:50:46.187756 3204903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key: {Name:mk258e31e31368b8ae182e758b28fd15f98dabb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:50:46.187958 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:50:46.188008 3204903 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:50:46.188031 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:50:46.188065 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:50:46.188095 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:50:46.188125 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:50:46.188174 3204903 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:50:46.188855 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:50:46.209179 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:50:46.229625 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:50:46.248348 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:50:46.280053 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:50:46.299540 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 11:50:46.331420 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:50:46.354812 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:50:46.379741 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:50:46.398349 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:50:46.416502 3204903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:50:46.434656 3204903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:50:46.448045 3204903 ssh_runner.go:195] Run: openssl version
	I1217 11:50:46.454404 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.462383 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:50:46.470220 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474117 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.474205 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:50:46.515776 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:50:46.523521 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:50:46.531167 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.538808 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:50:46.546526 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550351 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.550420 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:50:46.591537 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:50:46.599582 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
	I1217 11:50:46.607230 3204903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.615038 3204903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:50:46.623144 3204903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627219 3204903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.627293 3204903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:50:46.668548 3204903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:50:46.676480 3204903 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
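	
	Note: the hash/symlink sequence above follows OpenSSL's hashed-directory convention: a CA under /etc/ssl/certs is located through a symlink named <subject-hash>.0. A minimal sketch of one iteration, condensed to a single link (paths taken from the log; the hash printed for minikubeCA.pem is the b5213941 used above):
	
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # the name OpenSSL looks up
	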
	I1217 11:50:46.684254 3204903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:50:46.687989 3204903 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:50:46.688053 3204903 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:50:46.688186 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:50:46.688251 3204903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:50:46.715504 3204903 cri.go:89] found id: ""
	I1217 11:50:46.715577 3204903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:50:46.723636 3204903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:50:46.731913 3204903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:50:46.732013 3204903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:50:46.740391 3204903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:50:46.740486 3204903 kubeadm.go:158] found existing configuration files:
	
	I1217 11:50:46.740554 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:50:46.748658 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:50:46.748734 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:50:46.756251 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:50:46.764744 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:50:46.764812 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:50:46.772495 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.780304 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:50:46.780374 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:50:46.787858 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:50:46.795827 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:50:46.795920 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
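	
	Note: each grep-then-rm pair above is the same stale-config check applied to one kubeconfig: if the file does not mention the expected control-plane endpoint, it is removed so kubeadm regenerates it. The sweep, condensed into one loop (endpoint and paths verbatim from the log):
	
	    ENDPOINT=https://control-plane.minikube.internal:8443
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
	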
	I1217 11:50:46.803940 3204903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:50:46.842300 3204903 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:50:46.842364 3204903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:50:46.914982 3204903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:50:46.915066 3204903 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:50:46.915120 3204903 kubeadm.go:319] OS: Linux
	I1217 11:50:46.915224 3204903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:50:46.915306 3204903 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:50:46.915380 3204903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:50:46.915458 3204903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:50:46.915534 3204903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:50:46.915612 3204903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:50:46.915688 3204903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:50:46.915760 3204903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:50:46.915833 3204903 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:50:46.991927 3204903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:50:46.992117 3204903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:50:46.992264 3204903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:50:47.011559 3204903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:50:47.018032 3204903 out.go:252]   - Generating certificates and keys ...
	I1217 11:50:47.018195 3204903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:50:47.018301 3204903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:50:47.129470 3204903 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:50:47.445618 3204903 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:50:47.915158 3204903 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:50:48.499656 3204903 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:50:48.596834 3204903 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:50:48.597124 3204903 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.753661 3204903 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:50:48.754010 3204903 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 11:50:48.982189 3204903 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:50:49.176711 3204903 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:50:49.329925 3204903 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:50:49.330545 3204903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:50:49.669219 3204903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:50:49.769896 3204903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:50:50.134620 3204903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:50:50.518232 3204903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:50:51.159536 3204903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:50:51.160438 3204903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:50:51.163380 3204903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:50:51.167113 3204903 out.go:252]   - Booting up control plane ...
	I1217 11:50:51.167275 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:50:51.167359 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:50:51.168888 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:50:51.187617 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:50:51.187958 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:50:51.195573 3204903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:50:51.195900 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:50:51.195946 3204903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:50:51.332866 3204903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:50:51.332987 3204903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:53:54.678520 3184285 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061049s
	I1217 11:53:54.678562 3184285 kubeadm.go:319] 
	I1217 11:53:54.678668 3184285 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:53:54.678735 3184285 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:53:54.679061 3184285 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:53:54.679078 3184285 kubeadm.go:319] 
	I1217 11:53:54.679259 3184285 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:53:54.679319 3184285 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:53:54.679610 3184285 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:53:54.679619 3184285 kubeadm.go:319] 
	I1217 11:53:54.684331 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:53:54.684946 3184285 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:53:54.685447 3184285 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:53:54.685728 3184285 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 11:53:54.685741 3184285 kubeadm.go:319] 
	I1217 11:53:54.685819 3184285 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
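	
	Note: the wait-control-plane failure reported above is a plain HTTP probe of the kubelet's local health endpoint timing out, so it can be reproduced by hand on the node. A sketch using the URL and the troubleshooting commands kubeadm itself prints:
	
	    curl -sSL http://127.0.0.1:10248/healthz; echo    # the probe kubeadm retries
	    systemctl status kubelet                          # is the unit running at all?
	    journalctl -xeu kubelet --no-pager | tail -n 100  # why it exited, if it did
	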
	I1217 11:53:54.685877 3184285 kubeadm.go:403] duration metric: took 8m8.248541569s to StartCluster
	I1217 11:53:54.685915 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:53:54.685995 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:53:54.712731 3184285 cri.go:89] found id: ""
	I1217 11:53:54.712767 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.712778 3184285 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:53:54.712784 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:53:54.712847 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:53:54.738074 3184285 cri.go:89] found id: ""
	I1217 11:53:54.738101 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.738110 3184285 logs.go:284] No container was found matching "etcd"
	I1217 11:53:54.738116 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:53:54.738176 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:53:54.763115 3184285 cri.go:89] found id: ""
	I1217 11:53:54.763142 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.763151 3184285 logs.go:284] No container was found matching "coredns"
	I1217 11:53:54.763160 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:53:54.763223 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:53:54.788613 3184285 cri.go:89] found id: ""
	I1217 11:53:54.788637 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.788646 3184285 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:53:54.788652 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:53:54.788710 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:53:54.814171 3184285 cri.go:89] found id: ""
	I1217 11:53:54.814207 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.814216 3184285 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:53:54.814222 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:53:54.814287 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:53:54.839339 3184285 cri.go:89] found id: ""
	I1217 11:53:54.839362 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.839370 3184285 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:53:54.839376 3184285 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:53:54.839434 3184285 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:53:54.866460 3184285 cri.go:89] found id: ""
	I1217 11:53:54.866486 3184285 logs.go:282] 0 containers: []
	W1217 11:53:54.866495 3184285 logs.go:284] No container was found matching "kindnet"
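	
	Note: after the init failure, minikube sweeps for every expected control-plane container by crictl name filter; each lookup above returns an empty ID list. The sweep, condensed (component names from the log):
	
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        [ -n "$ids" ] || echo "No container was found matching \"$name\""
	    done
	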
	I1217 11:53:54.866505 3184285 logs.go:123] Gathering logs for container status ...
	I1217 11:53:54.866516 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 11:53:54.897933 3184285 logs.go:123] Gathering logs for kubelet ...
	I1217 11:53:54.897961 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:53:54.955505 3184285 logs.go:123] Gathering logs for dmesg ...
	I1217 11:53:54.955540 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:53:54.972937 3184285 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:53:54.972967 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:53:55.055017 3184285 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 11:53:55.043684    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.046958    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.047779    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.049563    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:53:55.050103    5414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:53:55.055059 3184285 logs.go:123] Gathering logs for containerd ...
	I1217 11:53:55.055072 3184285 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1217 11:53:55.106009 3184285 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 11:53:55.106084 3184285 out.go:285] * 
	W1217 11:53:55.106176 3184285 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:53:55.106224 3184285 out.go:285] * 
	W1217 11:53:55.108372 3184285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:53:55.114186 3184285 out.go:203] 
	W1217 11:53:55.117966 3184285 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061049s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:53:55.118025 3184285 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:53:55.118053 3184285 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 11:53:55.121856 3184285 out.go:203] 
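	
	Note: both failed attempts stall at the same point: the kubelet never answers its health probe, and minikube's parting advice is to retry with an explicit cgroup driver. A hedged sketch of that retry (the --extra-config flag is quoted verbatim from the Suggestion line above; <profile> is a placeholder for the failing profile, and failCgroupV1 is the KubeletConfiguration field named in the cgroups-v1 warning):
	
	    # Retry with the kubelet cgroup driver pinned to systemd, per the log.
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	    # On a cgroup v1 host with kubelet >= 1.35, the warning above additionally
	    # requires failCgroupV1: false in the KubeletConfiguration.
	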
	I1217 11:54:51.328759 3204903 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001152207s
	I1217 11:54:51.328801 3204903 kubeadm.go:319] 
	I1217 11:54:51.328906 3204903 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:54:51.328965 3204903 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:54:51.329441 3204903 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:54:51.329455 3204903 kubeadm.go:319] 
	I1217 11:54:51.329644 3204903 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:54:51.329715 3204903 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:54:51.329900 3204903 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:54:51.329906 3204903 kubeadm.go:319] 
	I1217 11:54:51.334038 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:54:51.334499 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:54:51.334619 3204903 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:54:51.334877 3204903 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 11:54:51.334886 3204903 kubeadm.go:319] 
	I1217 11:54:51.334961 3204903 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1217 11:54:51.335076 3204903 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-669680] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001152207s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 11:54:51.335168 3204903 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1217 11:54:51.745608 3204903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
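	
	Note: between attempts, the failed init is rolled back so kubeadm starts from a clean slate, and the kubelet's unit state is checked. The two steps, condensed from the log (flags verbatim):
	
	    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
	        kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	    sudo systemctl is-active --quiet kubelet && echo "kubelet still active"
	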
	I1217 11:54:51.758936 3204903 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 11:54:51.759045 3204903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:54:51.767791 3204903 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:54:51.767821 3204903 kubeadm.go:158] found existing configuration files:
	
	I1217 11:54:51.767929 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:54:51.776485 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:54:51.776552 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:54:51.784313 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:54:51.792583 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:54:51.792692 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:54:51.800824 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:54:51.809125 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:54:51.809247 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:54:51.818264 3204903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:54:51.826373 3204903 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:54:51.826439 3204903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:54:51.834569 3204903 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 11:54:51.873437 3204903 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 11:54:51.873499 3204903 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:54:51.944757 3204903 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 11:54:51.944829 3204903 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 11:54:51.944868 3204903 kubeadm.go:319] OS: Linux
	I1217 11:54:51.944915 3204903 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 11:54:51.944965 3204903 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 11:54:51.945013 3204903 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 11:54:51.945062 3204903 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 11:54:51.945112 3204903 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 11:54:51.945161 3204903 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 11:54:51.945207 3204903 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 11:54:51.945256 3204903 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 11:54:51.945304 3204903 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 11:54:52.011393 3204903 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:54:52.011506 3204903 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:54:52.011597 3204903 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:54:52.018267 3204903 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:54:52.021826 3204903 out.go:252]   - Generating certificates and keys ...
	I1217 11:54:52.021926 3204903 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:54:52.022003 3204903 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:54:52.022098 3204903 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 11:54:52.022197 3204903 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 11:54:52.022313 3204903 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 11:54:52.022392 3204903 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 11:54:52.023051 3204903 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 11:54:52.023420 3204903 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 11:54:52.023720 3204903 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 11:54:52.024098 3204903 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 11:54:52.024395 3204903 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 11:54:52.024488 3204903 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:54:52.154533 3204903 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:54:52.254828 3204903 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:54:52.520215 3204903 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:54:52.620865 3204903 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:54:52.853590 3204903 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:54:52.854283 3204903 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:54:52.857519 3204903 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:54:52.860706 3204903 out.go:252]   - Booting up control plane ...
	I1217 11:54:52.860973 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:54:52.861072 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:54:52.862252 3204903 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:54:52.883837 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:54:52.883954 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:54:52.891508 3204903 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:54:52.891860 3204903 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:54:52.891912 3204903 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:54:53.027569 3204903 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:54:53.027697 3204903 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:45:33 no-preload-118262 containerd[757]: time="2025-12-17T11:45:33.505204167Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.939956582Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.942293323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.949096337Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:34 no-preload-118262 containerd[757]: time="2025-12-17T11:45:34.949777101Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.535897084Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.538753041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.553736301Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:36 no-preload-118262 containerd[757]: time="2025-12-17T11:45:36.554961027Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.053706291Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.056518475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.074021896Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:38 no-preload-118262 containerd[757]: time="2025-12-17T11:45:38.075564022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.899212727Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.902323078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.911666939Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:39 no-preload-118262 containerd[757]: time="2025-12-17T11:45:39.912612705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.544997473Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.548274171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.559283871Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:41 no-preload-118262 containerd[757]: time="2025-12-17T11:45:41.561918164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.580017138Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.582800563Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.590535248Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 17 11:45:42 no-preload-118262 containerd[757]: time="2025-12-17T11:45:42.590987315Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:55:52.063814    6895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:55:52.064815    6895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:55:52.066590    6895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:55:52.067091    6895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:55:52.068836    6895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
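All five kubectl attempts above fail identically: nothing accepts connections on localhost:8443, so the apiserver static pod never came up. A quick Go check that reproduces just the TCP-level symptom (the address comes from the error text; the timeout is an assumption):

	// dialcheck.go - confirm whether anything accepts connections on the
	// apiserver address from the errors above (localhost:8443).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err) // matches "connection refused" above
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}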
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 11:55:52 up 17:38,  0 user,  load average: 0.30, 0.89, 1.59
	Linux no-preload-118262 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 11:55:48 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:55:49 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 472.
	Dec 17 11:55:49 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:49 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:49 no-preload-118262 kubelet[6774]: E1217 11:55:49.311682    6774 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:55:49 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:55:49 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:55:50 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 473.
	Dec 17 11:55:50 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:50 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:50 no-preload-118262 kubelet[6779]: E1217 11:55:50.072791    6779 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:55:50 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:55:50 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:55:50 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 17 11:55:50 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:50 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:50 no-preload-118262 kubelet[6785]: E1217 11:55:50.841206    6785 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:55:50 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:55:50 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 11:55:51 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 17 11:55:51 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:51 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 11:55:51 no-preload-118262 kubelet[6811]: E1217 11:55:51.600240    6811 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 11:55:51 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 11:55:51 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
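The kubelet is stuck in a systemd restart loop (counters 472 through 475 in this window alone) because kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host. A hedged Go sketch of the usual way to detect the cgroup version, by checking the filesystem magic of /sys/fs/cgroup; the constant is CGROUP2_SUPER_MAGIC from the Linux UAPI headers, and the sketch is Linux-only:

	// cgroupver.go - report whether the host is on cgroup v2, the condition
	// the kubelet validation failure above is enforcing.
	package main

	import (
		"fmt"
		"syscall"
	)

	const cgroup2SuperMagic = 0x63677270 // CGROUP2_SUPER_MAGIC from linux/magic.h

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if st.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified) - kubelet would accept this host")
		} else {
			fmt.Println("cgroup v1 - matches the kubelet failure above")
		}
	}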
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262: exit status 6 (354.693588ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 11:55:52.536162 3212690 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-118262" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (112.90s)
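The status probe above passes --format={{.APIServer}}, a Go text/template rendered against minikube's status struct; with the cluster down it prints Stopped plus the stale-context warning. A standalone illustration of that templating style (the Status struct here is a hypothetical stand-in, not minikube's actual type):

	// statusfmt.go - illustrate the --format={{.APIServer}} templating used
	// by the status command above. Status is a stand-in struct.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"})
	}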

TestStartStop/group/no-preload/serial/SecondStart (370.62s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1217 11:56:47.680860 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:58:36.152526 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:58:46.159400 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:58:50.515703 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 80 (6m8.731654332s)

-- stdout --
	* [no-preload-118262] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-118262" primary control-plane node in "no-preload-118262" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1217 11:55:54.097672 3212985 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:55:54.097800 3212985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:54.097813 3212985 out.go:374] Setting ErrFile to fd 2...
	I1217 11:55:54.097821 3212985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:54.098207 3212985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:55:54.098683 3212985 out.go:368] Setting JSON to false
	I1217 11:55:54.100030 3212985 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63504,"bootTime":1765909050,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:55:54.100100 3212985 start.go:143] virtualization:  
	I1217 11:55:54.103066 3212985 out.go:179] * [no-preload-118262] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:55:54.106891 3212985 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:55:54.107071 3212985 notify.go:221] Checking for updates...
	I1217 11:55:54.112925 3212985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:55:54.115810 3212985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:55:54.118639 3212985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:55:54.121576 3212985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:55:54.124535 3212985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:55:54.127987 3212985 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:55:54.128592 3212985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:55:54.156670 3212985 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:55:54.156789 3212985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:54.209915 3212985 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:55:54.200713336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:55:54.210021 3212985 docker.go:319] overlay module found
	I1217 11:55:54.213155 3212985 out.go:179] * Using the docker driver based on existing profile
	I1217 11:55:54.215964 3212985 start.go:309] selected driver: docker
	I1217 11:55:54.215988 3212985 start.go:927] validating driver "docker" against &{Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:54.216113 3212985 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:55:54.217029 3212985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:54.283254 3212985 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:55:54.273141828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:55:54.283582 3212985 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:54.283614 3212985 cni.go:84] Creating CNI manager for ""
	I1217 11:55:54.283667 3212985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:55:54.283720 3212985 start.go:353] cluster config:
	{Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:54.286974 3212985 out.go:179] * Starting "no-preload-118262" primary control-plane node in "no-preload-118262" cluster
	I1217 11:55:54.289858 3212985 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:55:54.292806 3212985 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:55:54.295787 3212985 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:55:54.295891 3212985 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:55:54.295951 3212985 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/config.json ...
	I1217 11:55:54.296271 3212985 cache.go:107] acquiring lock: {Name:mk815fc0c67b76ed2ee0b075f6917d43e67b13d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296357 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 11:55:54.296371 3212985 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.017µs
	I1217 11:55:54.296388 3212985 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 11:55:54.296401 3212985 cache.go:107] acquiring lock: {Name:mk11644c35fa0d35fcf9d5a865af6c28a7df16d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296484 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 11:55:54.296496 3212985 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 96.622µs
	I1217 11:55:54.296503 3212985 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296515 3212985 cache.go:107] acquiring lock: {Name:mk02712d952db0244ab56f62810e58a983831503 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296551 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 11:55:54.296561 3212985 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 47.679µs
	I1217 11:55:54.296569 3212985 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296591 3212985 cache.go:107] acquiring lock: {Name:mk436387f099b91bd6762b69e3678ebc0f9561cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296627 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 11:55:54.296637 3212985 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 52.323µs
	I1217 11:55:54.296644 3212985 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296653 3212985 cache.go:107] acquiring lock: {Name:mkf4cd732ad0857bbeaf7d91402ed78da15112e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296678 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 11:55:54.296683 3212985 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 32.098µs
	I1217 11:55:54.296690 3212985 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296699 3212985 cache.go:107] acquiring lock: {Name:mka934c06f25efbc149ef4769eaae5adad4ea53a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296728 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1217 11:55:54.296733 3212985 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.873µs
	I1217 11:55:54.296739 3212985 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 11:55:54.296748 3212985 cache.go:107] acquiring lock: {Name:mkb53641077bc34de612e9b78566264ac82d9b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296778 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 11:55:54.296787 3212985 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 39.884µs
	I1217 11:55:54.296793 3212985 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 11:55:54.296801 3212985 cache.go:107] acquiring lock: {Name:mkca0a51840ba852f371cde8bcc41ec807c30a00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296838 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 11:55:54.296847 3212985 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 46.588µs
	I1217 11:55:54.296856 3212985 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 11:55:54.296862 3212985 cache.go:87] Successfully saved all images to host disk.
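Each cache.go stanza above takes a per-image lock, checks whether the cached tarball already exists on disk, and skips the save when it does, which is why every image resolves in tens of microseconds. A compact Go sketch of that check-before-save pattern (the path and the saveImage helper are illustrative placeholders):

	// imagecache.go - the stat-then-skip pattern cache.go logs above.
	package main

	import (
		"fmt"
		"os"
	)

	// saveImage stands in for the real export-to-tarball step.
	func saveImage(image, dest string) error {
		return os.WriteFile(dest, []byte(image), 0o644)
	}

	func ensureCached(image, dest string) error {
		if _, err := os.Stat(dest); err == nil {
			fmt.Printf("cache image %q exists, skipping save\n", image)
			return nil
		}
		return saveImage(image, dest)
	}

	func main() {
		if err := ensureCached("registry.k8s.io/pause:3.10.1", "/tmp/pause_3.10.1"); err != nil {
			fmt.Println("save failed:", err)
		}
	}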
	I1217 11:55:54.316030 3212985 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:55:54.316051 3212985 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:55:54.316070 3212985 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:55:54.316101 3212985 start.go:360] acquireMachinesLock for no-preload-118262: {Name:mka8b15d744256405cc79d3bb936a81c229c3b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.316161 3212985 start.go:364] duration metric: took 39.77µs to acquireMachinesLock for "no-preload-118262"
	I1217 11:55:54.316185 3212985 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:55:54.316190 3212985 fix.go:54] fixHost starting: 
	I1217 11:55:54.316490 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:55:54.332759 3212985 fix.go:112] recreateIfNeeded on no-preload-118262: state=Stopped err=<nil>
	W1217 11:55:54.332793 3212985 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 11:55:54.338085 3212985 out.go:252] * Restarting existing docker container for "no-preload-118262" ...
	I1217 11:55:54.338180 3212985 cli_runner.go:164] Run: docker start no-preload-118262
	I1217 11:55:54.606459 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:55:54.631006 3212985 kic.go:432] container "no-preload-118262" state is running.
	I1217 11:55:54.631393 3212985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:55:54.652451 3212985 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/config.json ...
	I1217 11:55:54.652674 3212985 machine.go:94] provisionDockerMachine start ...
	I1217 11:55:54.652732 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:54.677707 3212985 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:54.677814 3212985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36048 <nil> <nil>}
	I1217 11:55:54.677822 3212985 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:55:54.678839 3212985 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 11:55:57.812197 3212985 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-118262
	
	I1217 11:55:57.812222 3212985 ubuntu.go:182] provisioning hostname "no-preload-118262"
	I1217 11:55:57.812295 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:57.830852 3212985 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:57.830954 3212985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36048 <nil> <nil>}
	I1217 11:55:57.830964 3212985 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-118262 && echo "no-preload-118262" | sudo tee /etc/hostname
	I1217 11:55:57.977757 3212985 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-118262
	
	I1217 11:55:57.977834 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:57.995318 3212985 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:57.995438 3212985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36048 <nil> <nil>}
	I1217 11:55:57.995454 3212985 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118262' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118262/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118262' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:55:58.129000 3212985 main.go:143] libmachine: SSH cmd err, output: <nil>: 
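The shell snippet above idempotently pins the hostname in /etc/hosts: if no entry already names no-preload-118262, it either rewrites the 127.0.1.1 line or appends one. A simplified Go equivalent of the same idea (append-only; the rewrite branch is omitted, and the path and hostname are taken from the log):

	// hostsentry.go - idempotent /etc/hosts update mirroring the shell
	// snippet above, reduced to the append case.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(strings.TrimSpace(line), hostname) {
				return nil // already present, nothing to do
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
		return err
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "no-preload-118262"); err != nil {
			fmt.Println("update failed:", err)
		}
	}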
	I1217 11:55:58.129026 3212985 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:55:58.129048 3212985 ubuntu.go:190] setting up certificates
	I1217 11:55:58.129058 3212985 provision.go:84] configureAuth start
	I1217 11:55:58.129137 3212985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:55:58.146660 3212985 provision.go:143] copyHostCerts
	I1217 11:55:58.146727 3212985 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:55:58.146737 3212985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:55:58.146821 3212985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:55:58.146983 3212985 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:55:58.146989 3212985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:55:58.147018 3212985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:55:58.147082 3212985 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:55:58.147087 3212985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:55:58.147112 3212985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:55:58.147174 3212985 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.no-preload-118262 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-118262]
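configureAuth mints a fresh server certificate whose SANs are listed in the line above. A minimal Go sketch of issuing a certificate with those names via crypto/x509, self-signed here for brevity where the real flow signs with the minikube CA (key type, serial, and validity are assumptions, apart from the 26280h CertExpiration seen in the cluster config):

	// servercert.go - sketch of minting a server certificate carrying the
	// SANs from the provision log above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-118262"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			DNSNames:     []string{"localhost", "minikube", "no-preload-118262"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}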
	I1217 11:55:58.677412 3212985 provision.go:177] copyRemoteCerts
	I1217 11:55:58.677487 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:55:58.677537 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:58.696153 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:58.796388 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:55:58.813975 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:55:58.831950 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:55:58.849412 3212985 provision.go:87] duration metric: took 720.33021ms to configureAuth
	I1217 11:55:58.849488 3212985 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:55:58.849743 3212985 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:55:58.849760 3212985 machine.go:97] duration metric: took 4.197077033s to provisionDockerMachine
	I1217 11:55:58.849769 3212985 start.go:293] postStartSetup for "no-preload-118262" (driver="docker")
	I1217 11:55:58.849784 3212985 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:55:58.849838 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:55:58.849879 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:58.867585 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:58.964748 3212985 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:55:58.968333 3212985 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:55:58.968360 3212985 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:55:58.968372 3212985 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:55:58.968454 3212985 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:55:58.968542 3212985 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:55:58.968640 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:55:58.976685 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:55:58.995461 3212985 start.go:296] duration metric: took 145.672692ms for postStartSetup
	I1217 11:55:58.995541 3212985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:55:58.995586 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:59.019538 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:59.117541 3212985 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:55:59.122267 3212985 fix.go:56] duration metric: took 4.80606985s for fixHost
	I1217 11:55:59.122307 3212985 start.go:83] releasing machines lock for "no-preload-118262", held for 4.806123002s
	I1217 11:55:59.122382 3212985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:55:59.141556 3212985 ssh_runner.go:195] Run: cat /version.json
	I1217 11:55:59.141603 3212985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:55:59.141611 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:59.141660 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:59.164620 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:59.164771 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:59.351128 3212985 ssh_runner.go:195] Run: systemctl --version
	I1217 11:55:59.358084 3212985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:55:59.362663 3212985 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:55:59.362766 3212985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:55:59.371125 3212985 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 11:55:59.371153 3212985 start.go:496] detecting cgroup driver to use...
	I1217 11:55:59.371206 3212985 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:55:59.371277 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:55:59.389287 3212985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:55:59.403831 3212985 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:55:59.403893 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:55:59.419497 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:55:59.432548 3212985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:55:59.542751 3212985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:55:59.663663 3212985 docker.go:234] disabling docker service ...
	I1217 11:55:59.663734 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:55:59.680687 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:55:59.694833 3212985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:55:59.829203 3212985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:55:59.950677 3212985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:55:59.964080 3212985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:55:59.978475 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:55:59.987229 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:55:59.996111 3212985 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:55:59.996210 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:56:00.040190 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:56:00.080003 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:56:00.111408 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:56:00.135837 3212985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:56:00.154709 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:56:00.192639 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:56:00.215745 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:56:00.252832 3212985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:56:00.276526 3212985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:56:00.295796 3212985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:00.437457 3212985 ssh_runner.go:195] Run: sudo systemctl restart containerd
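The run of sed edits above rewrites /etc/containerd/config.toml in place: pinning sandbox_image to pause:3.10.1, forcing SystemdCgroup = false to match the host's cgroupfs driver, normalizing the runc runtime type, and restoring enable_unprivileged_ports before restarting containerd. A hedged Go equivalent of just the SystemdCgroup rewrite, mirroring the sed expression (error handling is minimal):

	// cgroupdriver.go - equivalent of the sed line above that flips
	// SystemdCgroup to false in /etc/containerd/config.toml.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}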
	I1217 11:56:00.567606 3212985 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:56:00.567753 3212985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:56:00.572865 3212985 start.go:564] Will wait 60s for crictl version
	I1217 11:56:00.572972 3212985 ssh_runner.go:195] Run: which crictl
	I1217 11:56:00.577625 3212985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:56:00.604322 3212985 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 11:56:00.604502 3212985 ssh_runner.go:195] Run: containerd --version
	I1217 11:56:00.631560 3212985 ssh_runner.go:195] Run: containerd --version
	I1217 11:56:00.656469 3212985 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:56:00.659351 3212985 cli_runner.go:164] Run: docker network inspect no-preload-118262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:56:00.676349 3212985 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 11:56:00.680496 3212985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:56:00.690917 3212985 kubeadm.go:884] updating cluster {Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:56:00.691048 3212985 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:56:00.691104 3212985 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:56:00.720555 3212985 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:56:00.720582 3212985 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:56:00.720590 3212985 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:56:00.720694 3212985 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118262 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
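The unit text above uses the standard systemd override idiom: the bare `ExecStart=` clears the command inherited from the base kubelet.service, and the second `ExecStart=` installs minikube's flags. A minimal sketch of that drop-in written by hand (flags abridged from the log above), followed by the reload that activates it:

	# Write the override; an empty ExecStart= resets the inherited command
	# before the replacement is set.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=containerd.service
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet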
	I1217 11:56:00.720772 3212985 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:56:00.749210 3212985 cni.go:84] Creating CNI manager for ""
	I1217 11:56:00.749238 3212985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:56:00.749254 3212985 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:56:00.749310 3212985 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118262 NodeName:no-preload-118262 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:56:00.749505 3212985 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-118262"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:56:00.749576 3212985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:56:00.757442 3212985 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:56:00.757524 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:56:00.765473 3212985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 11:56:00.778740 3212985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:56:00.792394 3212985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
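The kubeadm.yaml.new just transferred holds the config rendered above. When debugging a config like this outside the test, kubeadm can check it without touching the node (the `config validate` subcommand exists in recent kubeadm releases; hedged accordingly):

	# Validate the rendered config in place; 'kubeadm init --dry-run --config'
	# would go one step further and simulate the init phases.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new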
	I1217 11:56:00.806454 3212985 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:56:00.810279 3212985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:56:00.820510 3212985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:00.934464 3212985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:56:00.950819 3212985 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262 for IP: 192.168.85.2
	I1217 11:56:00.950891 3212985 certs.go:195] generating shared ca certs ...
	I1217 11:56:00.950922 3212985 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:00.951114 3212985 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:56:00.951194 3212985 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:56:00.951232 3212985 certs.go:257] generating profile certs ...
	I1217 11:56:00.951382 3212985 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/client.key
	I1217 11:56:00.951530 3212985 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key.082f94c0
	I1217 11:56:00.951606 3212985 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key
	I1217 11:56:00.951762 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:56:00.951827 3212985 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:56:00.951867 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:56:00.951923 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:56:00.952000 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:56:00.952049 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:56:00.952133 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:56:00.952760 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:56:00.978711 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:56:00.997524 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:56:01.017452 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:56:01.037516 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:56:01.055698 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:56:01.078786 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:56:01.098475 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:56:01.116977 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:56:01.136015 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:56:01.156004 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:56:01.175302 3212985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:56:01.190197 3212985 ssh_runner.go:195] Run: openssl version
	I1217 11:56:01.197107 3212985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.205490 3212985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:56:01.214061 3212985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.218349 3212985 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.218423 3212985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.260525 3212985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:56:01.268554 3212985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.276397 3212985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:56:01.284768 3212985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.289382 3212985 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.289516 3212985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.331248 3212985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:56:01.338774 3212985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.346651 3212985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:56:01.354564 3212985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.358698 3212985 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.358775 3212985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.400939 3212985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
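Each `openssl x509 -hash` run above prints the subject hash that OpenSSL's certificate-directory lookup expects, and the following `test -L /etc/ssl/certs/<hash>.0` asserts the symlink minikube just created. The same link can be rebuilt manually (cert path taken from this log):

	# Recreate one subject-hash link the way the ln -fs calls above do.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"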
	I1217 11:56:01.408692 3212985 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:56:01.412548 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 11:56:01.453899 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 11:56:01.495014 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 11:56:01.536150 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 11:56:01.577723 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 11:56:01.619271 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
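The six `-checkend 86400` probes above make openssl exit non-zero if a certificate will expire within 86400 seconds (24 h); all of them passing is what lets minikube skip regenerating the control-plane certs. Standalone form:

	# Exit status 0 means the cert is still valid 24h from now.
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for >=24h" || echo "expiring within 24h (or unreadable)"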
	I1217 11:56:01.660657 3212985 kubeadm.go:401] StartCluster: {Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:56:01.660750 3212985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:56:01.660833 3212985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:56:01.687958 3212985 cri.go:89] found id: ""
	I1217 11:56:01.688081 3212985 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:56:01.696230 3212985 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 11:56:01.696252 3212985 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 11:56:01.696304 3212985 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 11:56:01.704102 3212985 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:56:01.704665 3212985 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:56:01.705100 3212985 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-118262" cluster setting kubeconfig missing "no-preload-118262" context setting]
	I1217 11:56:01.705938 3212985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:01.707388 3212985 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 11:56:01.717641 3212985 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 11:56:01.717678 3212985 kubeadm.go:602] duration metric: took 21.41966ms to restartPrimaryControlPlane
	I1217 11:56:01.717689 3212985 kubeadm.go:403] duration metric: took 57.040291ms to StartCluster
	I1217 11:56:01.717705 3212985 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:01.717769 3212985 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:56:01.718373 3212985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:01.718582 3212985 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:56:01.718926 3212985 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:56:01.718998 3212985 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:56:01.719125 3212985 addons.go:70] Setting storage-provisioner=true in profile "no-preload-118262"
	I1217 11:56:01.719146 3212985 addons.go:239] Setting addon storage-provisioner=true in "no-preload-118262"
	I1217 11:56:01.719168 3212985 host.go:66] Checking if "no-preload-118262" exists ...
	I1217 11:56:01.719170 3212985 addons.go:70] Setting dashboard=true in profile "no-preload-118262"
	I1217 11:56:01.719228 3212985 addons.go:239] Setting addon dashboard=true in "no-preload-118262"
	W1217 11:56:01.719262 3212985 addons.go:248] addon dashboard should already be in state true
	I1217 11:56:01.719308 3212985 host.go:66] Checking if "no-preload-118262" exists ...
	I1217 11:56:01.719638 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.719916 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.720320 3212985 addons.go:70] Setting default-storageclass=true in profile "no-preload-118262"
	I1217 11:56:01.720337 3212985 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-118262"
	I1217 11:56:01.720702 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.724486 3212985 out.go:179] * Verifying Kubernetes components...
	I1217 11:56:01.727633 3212985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:01.766705 3212985 addons.go:239] Setting addon default-storageclass=true in "no-preload-118262"
	I1217 11:56:01.766751 3212985 host.go:66] Checking if "no-preload-118262" exists ...
	I1217 11:56:01.767177 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.793928 3212985 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:56:01.799560 3212985 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:01.799586 3212985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:56:01.799655 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:56:01.806809 3212985 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:56:01.806838 3212985 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:56:01.806902 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:56:01.809039 3212985 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 11:56:01.812535 3212985 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 11:56:01.817510 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 11:56:01.817535 3212985 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 11:56:01.817604 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:56:01.867768 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:56:01.868081 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:56:01.868642 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
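All three SSH clients above target 127.0.0.1:36048, the host port Docker published for the container's 22/tcp (resolved by the `docker container inspect ... "22/tcp"` calls just before). The equivalent manual session, using the key path from this log (the port is specific to this run):

	# Open the same session minikube's sshutil uses.
	ssh -i /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 \
	    -p 36048 docker@127.0.0.1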
	I1217 11:56:01.951610 3212985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:56:02.024186 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 11:56:02.024265 3212985 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 11:56:02.043227 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 11:56:02.043295 3212985 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 11:56:02.048999 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:56:02.054810 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:02.086887 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 11:56:02.086960 3212985 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 11:56:02.105255 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 11:56:02.105288 3212985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 11:56:02.121678 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 11:56:02.121719 3212985 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 11:56:02.137737 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 11:56:02.137779 3212985 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 11:56:02.153356 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 11:56:02.153397 3212985 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 11:56:02.168513 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 11:56:02.168557 3212985 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 11:56:02.185798 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:56:02.185838 3212985 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 11:56:02.201465 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:02.758705 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.758748 3212985 retry.go:31] will retry after 229.540303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:02.758805 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.758819 3212985 retry.go:31] will retry after 199.856736ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:02.759004 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.759019 3212985 retry.go:31] will retry after 172.784882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.759090 3212985 node_ready.go:35] waiting up to 6m0s for node "no-preload-118262" to be "Ready" ...
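The 6m0s node wait above can be reproduced with kubectl once the repaired kubeconfig is in place (node name from this log):

	# Block until the node reports Ready, mirroring minikube's 6m budget.
	kubectl wait --for=condition=Ready node/no-preload-118262 --timeout=6m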
	I1217 11:56:02.932840 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:56:02.959390 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:02.988844 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:03.015687 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.015717 3212985 retry.go:31] will retry after 427.179701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:03.053926 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.053954 3212985 retry.go:31] will retry after 351.36ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:03.071903 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.071938 3212985 retry.go:31] will retry after 460.512525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.405971 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:03.443451 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:03.475863 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.475958 3212985 retry.go:31] will retry after 760.184682ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.533075 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:03.533848 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.533930 3212985 retry.go:31] will retry after 500.153362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:03.629508 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.629561 3212985 retry.go:31] will retry after 828.549967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.034401 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:04.098672 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.098706 3212985 retry.go:31] will retry after 456.814782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.236935 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:04.357588 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.357621 3212985 retry.go:31] will retry after 773.010299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.458872 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:04.516437 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.516469 3212985 retry.go:31] will retry after 1.201644683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.556582 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:04.622293 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.622328 3212985 retry.go:31] will retry after 1.824101164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
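
Editor's note: every `kubectl apply` in this stretch fails for the same reason. With client-side validation enabled, kubectl first downloads the OpenAPI schema from the apiserver (`/openapi/v2`), and the apiserver on localhost:8443 is refusing connections, so validation fails before any manifest is even submitted (hence the suggestion in the error text to pass `--validate=false`). A minimal sketch of the probe kubectl is effectively making, with the endpoint and 32s timeout taken from the log; the insecure TLS setup is an assumption for illustration only, since kubectl uses its own authenticated transport:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// kubectl's client-side validation fetches the OpenAPI schema first;
	// the log above shows exactly this request failing.
	// NOTE: skipping TLS verification is for illustration only --
	// kubectl verifies the cluster CA and presents client certs.
	client := &http.Client{
		Timeout: 32 * time.Second, // matches ?timeout=32s in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2")
	if err != nil {
		// While the apiserver is down this prints "connection refused",
		// which is what makes every apply above fail validation.
		fmt.Println("openapi probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi reachable, status:", resp.Status)
}
```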
	W1217 11:56:04.760127 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:05.131775 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:05.197068 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.197101 3212985 retry.go:31] will retry after 718.007742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.719362 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:05.829095 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.829128 3212985 retry.go:31] will retry after 1.266711526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.915322 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:05.976930 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.976963 3212985 retry.go:31] will retry after 983.864547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:06.446716 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:06.526752 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:06.526789 3212985 retry.go:31] will retry after 1.791049068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:06.962003 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:07.021949 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:07.021981 3212985 retry.go:31] will retry after 3.775428423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:07.096119 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:07.154813 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:07.154841 3212985 retry.go:31] will retry after 1.6043331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:07.261035 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
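
Editor's note: the `retry.go:31` lines show minikube's addon applier re-running each failed `kubectl apply` after a randomized, growing delay (500ms, 828ms, 1.2s, 1.8s, ... in this run). A minimal sketch of that retry-with-jittered-backoff pattern, using a generic helper rather than minikube's actual `retry` package:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between tries -- the
// shape of the "will retry after ..." delays in the log above.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		sleep := delay + jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return errors.New("connect: connection refused")
		}
		return nil // apiserver finally answering
	}, 10, 500*time.Millisecond)
	fmt.Println("result:", err)
}
```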
	I1217 11:56:08.318583 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:08.381665 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:08.381708 3212985 retry.go:31] will retry after 3.517495633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:08.759662 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:08.864890 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:08.864925 3212985 retry.go:31] will retry after 2.28260361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:09.760003 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:10.798319 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:10.860002 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:10.860034 3212985 retry.go:31] will retry after 4.82591476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.148644 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:11.216089 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.216123 3212985 retry.go:31] will retry after 6.175133091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.900240 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:11.969428 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.969471 3212985 retry.go:31] will retry after 2.437731885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:12.260387 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:14.408207 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:14.471530 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:14.471564 3212985 retry.go:31] will retry after 7.973001246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:14.760396 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:15.686226 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:15.751739 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:15.751823 3212985 retry.go:31] will retry after 4.990913672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:16.760725 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:17.392069 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:17.450109 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:17.450199 3212985 retry.go:31] will retry after 4.605565076s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:19.260667 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:20.743360 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:20.828411 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:20.828465 3212985 retry.go:31] will retry after 11.110369506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:21.759604 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:22.056015 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:22.115928 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:22.116008 3212985 retry.go:31] will retry after 10.310245173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
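
The "will retry after ..." intervals that retry.go logs grow roughly geometrically with random jitter (for the storage-provisioner apply above: ≈5.0s, 11.1s, 17.8s, 26.5s), so repeated failures back off instead of hammering a dead apiserver. A minimal Go sketch of that jittered-backoff pattern follows; retryWithBackoff and applyAddon are hypothetical names illustrating the pattern, not minikube's implementation.

	// A sketch of jittered exponential backoff, as the log's retry intervals suggest.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls applyFn until it succeeds or attempts run out,
	// doubling the base delay each round and adding up to 50% random jitter.
	func retryWithBackoff(attempts int, base time.Duration, applyFn func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			err := applyFn()
			if err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return errors.New("gave up after all retries")
	}

	func main() {
		applyAddon := func() error { // stands in for the failing kubectl apply above
			return errors.New("connect: connection refused")
		}
		_ = retryWithBackoff(4, 2*time.Second, applyAddon)
	}
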
	I1217 11:56:22.444820 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:22.509039 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:22.509077 3212985 retry.go:31] will retry after 13.279816116s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:23.759723 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:26.259612 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:28.260333 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:30.759694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:31.939110 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:32.013382 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:32.013417 3212985 retry.go:31] will retry after 17.792843999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:32.426534 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:32.489297 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:32.489332 3212985 retry.go:31] will retry after 10.214719089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:32.760038 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:35.259670 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:35.789928 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:35.882559 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:35.882592 3212985 retry.go:31] will retry after 9.227629247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:37.260542 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:39.759863 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:42.259856 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:42.704316 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:42.766302 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:42.766335 3212985 retry.go:31] will retry after 16.793347769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:44.260613 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:45.111278 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:45.238863 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:45.238978 3212985 retry.go:31] will retry after 27.971446484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:46.759704 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:48.760327 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:49.806921 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:49.869443 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:49.869476 3212985 retry.go:31] will retry after 26.53119581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:50.760462 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:53.259758 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:55.759768 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:58.259631 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:59.560400 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:59.620061 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:59.620096 3212985 retry.go:31] will retry after 23.364320547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:00.259931 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:02.761870 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:05.259592 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:07.759629 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:09.760369 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:12.259883 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
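
The interleaved node_ready.go:55 warnings come from a parallel poll of the node's "Ready" condition against 192.168.85.2:8443, which fails with the same connection refusal while the apiserver is down. A minimal client-go sketch of such a poll follows; the kubeconfig path and node name are taken from the log, but the rest is illustrative rather than minikube's actual code.

	// A sketch of polling a node's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady fetches the node and checks its Ready condition.
	func nodeIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. "connect: connection refused" while the apiserver is down
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeIsReady(clientset, "no-preload-118262")
			if err != nil {
				fmt.Println("will retry:", err)
			} else if ready {
				fmt.Println("node Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
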
	I1217 11:57:13.211445 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:57:13.333945 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:57:13.333983 3212985 retry.go:31] will retry after 44.1812533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:14.260040 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:16.260864 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:16.401385 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:57:16.464135 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:57:16.464167 3212985 retry.go:31] will retry after 45.341892172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:18.759728 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:20.760439 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:22.985044 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:57:23.107103 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:23.107213 3212985 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1217 11:57:23.259704 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:25.759680 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:28.259589 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:30.759848 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:33.259631 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:35.259696 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:37.759650 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:39.759939 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:42.259840 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:44.759690 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:46.760542 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:49.259755 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:51.259894 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:53.259968 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:55.760675 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:57.516068 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:57:57.626784 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:57.626882 3212985 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1217 11:57:58.259617 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:00.259717 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:58:01.806452 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:58:01.869925 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:58:01.870041 3212985 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 11:58:01.873020 3212985 out.go:179] * Enabled addons: 
	I1217 11:58:01.875902 3212985 addons.go:530] duration metric: took 2m0.156897144s for enable addons: enabled=[]
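The summary line above shows the addon phase exhausting its roughly two-minute budget with enabled=[]: every callback kept hitting the unreachable apiserver, so zero addons were enabled and minikube moves on rather than blocking startup. A deadline-bounded sketch of that pattern, with all names assumed for illustration:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    func main() {
        // Shared two-minute budget for all addon callbacks, mirroring the
        // "took 2m0.156897144s for enable addons" summary above.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        callbacks := map[string]func(context.Context) error{
            "storage-provisioner": func(context.Context) error {
                return fmt.Errorf("connection refused") // stand-in for the failing apply
            },
        }

        start := time.Now()
        enabled := []string{}
        for name, enable := range callbacks {
            if ctx.Err() != nil {
                break // budget exhausted: stop trying, report what succeeded
            }
            if err := enable(ctx); err == nil {
                enabled = append(enabled, name)
            }
        }
        fmt.Printf("took %s for enable addons: enabled=%v\n", time.Since(start), enabled)
    }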
	W1217 11:58:02.759596 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:04.759668 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:07.259570 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:09.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:11.759720 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:14.259689 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:16.759733 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:19.259603 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:21.259694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:23.759673 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:26.259638 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:28.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:30.759896 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:33.259742 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:35.759679 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:38.259699 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:40.759816 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:43.259680 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:45.260027 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:47.759590 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:49.759638 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:52.259559 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:54.260573 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:56.759512 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:59.259711 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:01.759695 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:03.759859 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:06.259746 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:08.759609 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:10.760559 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:13.259644 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:15.259686 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:17.259753 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:19.260149 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:21.759780 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:24.260629 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:26.759690 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:28.759892 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:31.260312 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:33.759633 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:35.759926 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:37.760524 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:40.259605 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:42.259707 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:44.260580 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:46.759769 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:49.260572 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:51.760687 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:54.259612 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:56.259701 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:58.759564 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:00.765674 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:03.260705 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:05.759690 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:08.259770 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:10.759636 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:12.759685 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:14.760245 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:17.259662 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:19.260014 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:21.760329 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:24.260230 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:26.260366 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:28.759678 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:31.259726 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:33.759557 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:35.759737 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:37.760125 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:40.259692 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:42.759670 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:44.760258 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:46.760539 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:49.260592 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:51.760613 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:54.260476 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:56.760549 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:59.260028 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:01.761273 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:04.259626 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:06.260640 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:08.759809 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:11.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:13.760361 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:16.260406 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:18.759622 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:20.760668 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:23.259693 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:25.259753 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:27.759647 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:29.760524 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:32.259673 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:34.259722 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:36.759694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:39.260642 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:41.759666 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:44.259623 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:46.260805 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:48.760674 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:51.260164 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:53.759783 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:56.259727 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:58.760523 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:02:01.260216 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:02:02.764189 3212985 node_ready.go:38] duration metric: took 6m0.005070756s for node "no-preload-118262" to be "Ready" ...
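The node_ready.go lines above poll the node's Ready condition on a roughly 2.5s cadence, treat transient errors like "connection refused" as retryable, and give up once the 6m0s window closes, which is what produces the WaitNodeCondition deadline error below. A client-go sketch of the same loop; the kubeconfig path, interval, and timeout are assumptions read off the log, not minikube's exact implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2.5s for up to 6 minutes, matching the log's cadence.
        err = wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, "no-preload-118262", metav1.GetOptions{})
                if err != nil {
                    fmt.Println("will retry:", err) // e.g. connection refused
                    return false, nil               // transient: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            fmt.Println("waiting for node to be ready:", err) // context deadline exceeded
        }
    }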
	I1217 12:02:02.767452 3212985 out.go:203] 
	W1217 12:02:02.770608 3212985 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 12:02:02.770638 3212985 out.go:285] * 
	W1217 12:02:02.772986 3212985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 12:02:02.776078 3212985 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 80
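The harness fails the test because the start command exited with status 80; the meaning of that specific code is minikube-internal and is not decoded here. A sketch of how a caller recovers a child process's exit status with os/exec, as a test wrapper like the "(dbg) Run" lines would:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "no-preload-118262")
        err := cmd.Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // ExitCode() is the process's exit status (80 in the failure above).
            fmt.Println("exit status:", exitErr.ExitCode())
        } else if err != nil {
            fmt.Println("could not run:", err) // e.g. binary not found
        }
    }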
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
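The post-mortem that follows dumps the container's full docker inspect JSON. When only a few fields matter (State.Status, State.Pid, and RestartCount are all visible below), the same data can be pulled with docker inspect's --format Go template; a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --format applies a Go template to the inspect result instead of
        // printing the whole JSON document shown below.
        out, err := exec.Command("docker", "inspect",
            "--format", "{{.State.Status}} pid={{.State.Pid}} restarts={{.RestartCount}}",
            "no-preload-118262").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Printf("%s", out) // e.g. "running pid=3213113 restarts=0"
    }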
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-118262
helpers_test.go:244: (dbg) docker inspect no-preload-118262:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	        "Created": "2025-12-17T11:45:23.889791979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3213113,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:55:54.36927291Z",
	            "FinishedAt": "2025-12-17T11:55:53.009633374Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hostname",
	        "HostsPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hosts",
	        "LogPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362-json.log",
	        "Name": "/no-preload-118262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-118262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-118262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	                "LowerDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-118262",
	                "Source": "/var/lib/docker/volumes/no-preload-118262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-118262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-118262",
	                "name.minikube.sigs.k8s.io": "no-preload-118262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a5bb1af38cbf7e52f627da4de2cc21445576f9ee9ac16469472822e1e4e3c56f",
	            "SandboxKey": "/var/run/docker/netns/a5bb1af38cbf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-118262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:fb:41:14:2f:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3227851744df2bdac9c367dc789ddfe2892f877b7b9b947cdcd81cb2897c4ba1",
	                    "EndpointID": "c35288f197473390678d887f2fedc1b13457164e1aa2e715d8bd350b76e059bf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-118262",
	                        "4578079103f7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
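
For reference, the harness extracts single fields from this inspect output with Go templates rather than re-parsing the full JSON; a minimal sketch of the same technique (profile name from this report, format strings as they appear verbatim later in this log):

	# Container state, as consulted by the status probe:
	docker inspect -f '{{.State.Status}}' no-preload-118262
	# Host port published for the container's SSH endpoint (22/tcp -> 36048 above):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-118262
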
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262: exit status 2 (352.601355ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
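
Note on the "(may be ok)" above: minikube's status command encodes component state in its exit code, so a nonzero exit alongside a "Running" host line is informational rather than a command failure. A minimal reproduction of the probe (command copied verbatim from this report):

	out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262
	echo "exit=$?"   # nonzero here reflects degraded components, not a dead host
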
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-118262 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-628462 image list --format=json                                                                                                                                                                                                              │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ pause   │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ unpause │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ stop    │ -p no-preload-118262 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ addons  │ enable dashboard -p no-preload-118262 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-669680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:58 UTC │                     │
	│ stop    │ -p newest-cni-669680 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │ 17 Dec 25 12:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-669680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │ 17 Dec 25 12:00 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 12:00:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 12:00:44.347526 3219848 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:00:44.347663 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347673 3219848 out.go:374] Setting ErrFile to fd 2...
	I1217 12:00:44.347678 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347938 3219848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 12:00:44.348321 3219848 out.go:368] Setting JSON to false
	I1217 12:00:44.349222 3219848 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63795,"bootTime":1765909050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 12:00:44.349300 3219848 start.go:143] virtualization:  
	I1217 12:00:44.352466 3219848 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 12:00:44.356190 3219848 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 12:00:44.356282 3219848 notify.go:221] Checking for updates...
	I1217 12:00:44.362135 3219848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:00:44.365177 3219848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:44.368881 3219848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 12:00:44.372015 3219848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 12:00:44.375014 3219848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:00:44.378336 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:44.378951 3219848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:00:44.413369 3219848 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 12:00:44.413513 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.473970 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.464532408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.474081 3219848 docker.go:319] overlay module found
	I1217 12:00:44.477205 3219848 out.go:179] * Using the docker driver based on existing profile
	I1217 12:00:44.480155 3219848 start.go:309] selected driver: docker
	I1217 12:00:44.480182 3219848 start.go:927] validating driver "docker" against &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.480300 3219848 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:00:44.481122 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.568687 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.559079636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.569054 3219848 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 12:00:44.569088 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:44.569145 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:44.569196 3219848 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.574245 3219848 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 12:00:44.576964 3219848 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 12:00:44.579814 3219848 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 12:00:44.582545 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:44.582593 3219848 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 12:00:44.582604 3219848 cache.go:65] Caching tarball of preloaded images
	I1217 12:00:44.582624 3219848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 12:00:44.582700 3219848 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 12:00:44.582711 3219848 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 12:00:44.582826 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.602190 3219848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 12:00:44.602216 3219848 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 12:00:44.602262 3219848 cache.go:243] Successfully downloaded all kic artifacts
	I1217 12:00:44.602326 3219848 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:00:44.602428 3219848 start.go:364] duration metric: took 68.29µs to acquireMachinesLock for "newest-cni-669680"
	I1217 12:00:44.602457 3219848 start.go:96] Skipping create...Using existing machine configuration
	I1217 12:00:44.602505 3219848 fix.go:54] fixHost starting: 
	I1217 12:00:44.602917 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.620734 3219848 fix.go:112] recreateIfNeeded on newest-cni-669680: state=Stopped err=<nil>
	W1217 12:00:44.620765 3219848 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 12:00:44.760258 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:46.760539 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:44.623987 3219848 out.go:252] * Restarting existing docker container for "newest-cni-669680" ...
	I1217 12:00:44.624072 3219848 cli_runner.go:164] Run: docker start newest-cni-669680
	I1217 12:00:44.870900 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.893559 3219848 kic.go:432] container "newest-cni-669680" state is running.
	I1217 12:00:44.894282 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:44.917205 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.917570 3219848 machine.go:94] provisionDockerMachine start ...
	I1217 12:00:44.917645 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:44.945980 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:44.946096 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:44.946104 3219848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:00:44.946864 3219848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 12:00:48.084367 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 12:00:48.084399 3219848 ubuntu.go:182] provisioning hostname "newest-cni-669680"
	I1217 12:00:48.084507 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.104367 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.104656 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.104680 3219848 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 12:00:48.247265 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 12:00:48.247353 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.270652 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.270788 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.270817 3219848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:00:48.417473 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:00:48.417557 3219848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 12:00:48.417596 3219848 ubuntu.go:190] setting up certificates
	I1217 12:00:48.417639 3219848 provision.go:84] configureAuth start
	I1217 12:00:48.417749 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:48.437471 3219848 provision.go:143] copyHostCerts
	I1217 12:00:48.437568 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 12:00:48.437587 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 12:00:48.437717 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 12:00:48.437858 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 12:00:48.437877 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 12:00:48.437916 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 12:00:48.438005 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 12:00:48.438028 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 12:00:48.438055 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 12:00:48.438157 3219848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
	I1217 12:00:48.577436 3219848 provision.go:177] copyRemoteCerts
	I1217 12:00:48.577506 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:00:48.577546 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.595338 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.692538 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:00:48.711734 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 12:00:48.729881 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 12:00:48.748237 3219848 provision.go:87] duration metric: took 330.555362ms to configureAuth
	I1217 12:00:48.748262 3219848 ubuntu.go:206] setting minikube options for container-runtime
	I1217 12:00:48.748550 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:48.748561 3219848 machine.go:97] duration metric: took 3.830976751s to provisionDockerMachine
	I1217 12:00:48.748569 3219848 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 12:00:48.748581 3219848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:00:48.748643 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:00:48.748683 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.766578 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.864654 3219848 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:00:48.868220 3219848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 12:00:48.868249 3219848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 12:00:48.868261 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 12:00:48.868318 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 12:00:48.868401 3219848 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 12:00:48.868523 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:00:48.876210 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:48.894408 3219848 start.go:296] duration metric: took 145.823675ms for postStartSetup
	I1217 12:00:48.894507 3219848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 12:00:48.894563 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.913872 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.010734 3219848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 12:00:49.017136 3219848 fix.go:56] duration metric: took 4.414624566s for fixHost
	I1217 12:00:49.017182 3219848 start.go:83] releasing machines lock for "newest-cni-669680", held for 4.414721098s
	I1217 12:00:49.017319 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:49.041576 3219848 ssh_runner.go:195] Run: cat /version.json
	I1217 12:00:49.041642 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.041898 3219848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:00:49.041972 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.071567 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.072178 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.261249 3219848 ssh_runner.go:195] Run: systemctl --version
	I1217 12:00:49.267897 3219848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:00:49.272503 3219848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:00:49.272574 3219848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:00:49.280715 3219848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 12:00:49.280743 3219848 start.go:496] detecting cgroup driver to use...
	I1217 12:00:49.280787 3219848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 12:00:49.280844 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 12:00:49.298858 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 12:00:49.313120 3219848 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:00:49.313230 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:00:49.329245 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:00:49.342531 3219848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:00:49.461223 3219848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:00:49.579409 3219848 docker.go:234] disabling docker service ...
	I1217 12:00:49.579510 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:00:49.594800 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:00:49.608313 3219848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:00:49.737460 3219848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:00:49.883222 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:00:49.897339 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:00:49.911914 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 12:00:49.921268 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 12:00:49.930257 3219848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 12:00:49.930398 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 12:00:49.939639 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.948689 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 12:00:49.958342 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.967395 3219848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:00:49.975730 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 12:00:49.984582 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 12:00:49.993553 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 12:00:50.009983 3219848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:00:50.019753 3219848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:00:50.028837 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.142686 3219848 ssh_runner.go:195] Run: sudo systemctl restart containerd
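
The sed sequence above rewrites /etc/containerd/config.toml in place before restarting the runtime. A condensed, hand-runnable replay of the key edits, with paths and values copied from this log (to be run inside the minikube container):

	# Keep the cgroupfs driver detected on the host OS:
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	# Pin the pause image expected by this Kubernetes version:
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	# Enable IPv4 forwarding and pick up the edited config:
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart containerd
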
	I1217 12:00:50.264183 3219848 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 12:00:50.264308 3219848 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 12:00:50.268160 3219848 start.go:564] Will wait 60s for crictl version
	I1217 12:00:50.268261 3219848 ssh_runner.go:195] Run: which crictl
	I1217 12:00:50.271790 3219848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 12:00:50.298148 3219848 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 12:00:50.298258 3219848 ssh_runner.go:195] Run: containerd --version
	I1217 12:00:50.318643 3219848 ssh_runner.go:195] Run: containerd --version
	I1217 12:00:50.346609 3219848 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 12:00:50.349545 3219848 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 12:00:50.366603 3219848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 12:00:50.370482 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
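
The one-liner above atomically rewrites /etc/hosts so the network gateway resolves as host.minikube.internal. Unrolled for readability (gateway IP taken from this log; $$ is the shell's own PID, giving a unique temp file):

	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$   # drop any stale record
	echo $'192.168.76.1\thost.minikube.internal' >> /tmp/h.$$     # append the current gateway
	sudo cp /tmp/h.$$ /etc/hosts                                  # copy back into place
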
	I1217 12:00:50.383622 3219848 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 12:00:50.386526 3219848 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:00:50.386672 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:50.386774 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.415106 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.415132 3219848 containerd.go:534] Images already preloaded, skipping extraction
	I1217 12:00:50.415224 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.444492 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.444517 3219848 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:00:50.444526 3219848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 12:00:50.444639 3219848 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
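The doubled ExecStart= in the kubelet drop-in above is the standard systemd override idiom: for ordinary services ExecStart= may only be set once, so a drop-in must first clear the inherited value with an empty ExecStart= before installing the replacement command. A minimal sketch of the same pattern (hypothetical command path, not the exact file minikube writes):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/usr/local/bin/kubelet --config=/var/lib/kubelet/config.yaml

A systemctl daemon-reload is required afterwards for systemd to pick up the drop-in, which is what the runner does a few entries below.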
	I1217 12:00:50.444718 3219848 ssh_runner.go:195] Run: sudo crictl info
	I1217 12:00:50.471453 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:50.471478 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:50.471497 3219848 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 12:00:50.471553 3219848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:00:50.471711 3219848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
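The generated kubeadm.yaml above stacks four API documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm releases (v1.26+) can sanity-check such a file directly; a sketch, assuming a kubeadm binary of the matching version is on PATH:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml

This is only an offline schema and field check; it does not confirm that a running cluster will accept the settings.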
	I1217 12:00:50.471828 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 12:00:50.480867 3219848 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:00:50.480998 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:00:50.488686 3219848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 12:00:50.504356 3219848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 12:00:50.520176 3219848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1217 12:00:50.535930 3219848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 12:00:50.540134 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
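That one-liner is a filter-and-append rewrite of /etc/hosts: drop any stale control-plane.minikube.internal entry, append the current mapping, stage the result in a temp file, then copy it back with sudo so the final write runs as root even though the redirection runs as the SSH user. The same command, expanded for readability:

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts  # drop the old entry, if any
      echo "192.168.76.2	control-plane.minikube.internal"      # append the fresh IP mapping
    } > /tmp/h.$$       # $$ expands to the shell PID, giving a unique temp file name
    sudo cp /tmp/h.$$ /etc/hosts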
	I1217 12:00:50.550629 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.669384 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:50.685420 3219848 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 12:00:50.685479 3219848 certs.go:195] generating shared ca certs ...
	I1217 12:00:50.685497 3219848 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:50.685634 3219848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 12:00:50.685683 3219848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 12:00:50.685690 3219848 certs.go:257] generating profile certs ...
	I1217 12:00:50.685787 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 12:00:50.685851 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 12:00:50.685893 3219848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 12:00:50.686084 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 12:00:50.686149 3219848 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 12:00:50.686177 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:00:50.686225 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:00:50.686286 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:00:50.686340 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 12:00:50.686422 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:50.687047 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:00:50.710384 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:00:50.730920 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:00:50.751265 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:00:50.772018 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 12:00:50.790833 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 12:00:50.810114 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:00:50.828402 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:00:50.846753 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:00:50.865705 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 12:00:50.886567 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 12:00:50.904533 3219848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:00:50.917457 3219848 ssh_runner.go:195] Run: openssl version
	I1217 12:00:50.923993 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.931839 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:00:50.939507 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943237 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943304 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.984637 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:00:50.992168 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 12:00:50.999795 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 12:00:51.020372 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024379 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024566 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.066006 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:00:51.074211 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.082049 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 12:00:51.090651 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.094888 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.095004 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.137313 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
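Each hash-and-symlink pair above follows OpenSSL's certificate-directory convention: openssl x509 -hash -noout prints the 8-hex-digit subject hash, and OpenSSL resolves trust by looking up <hash>.0 symlinks in /etc/ssl/certs (the .0 suffix is a collision counter). A sketch of wiring one up by hand for the CA copied in earlier:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"

The sudo test -L probes (b5213941.0 and friends) are simply verifying that those links exist.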
	I1217 12:00:51.145186 3219848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:00:51.149385 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 12:00:51.191456 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 12:00:51.232840 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 12:00:51.275219 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 12:00:51.317313 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 12:00:51.358746 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
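The -checkend 86400 runs above are expiry probes: openssl x509 -checkend N exits 0 if the certificate is still valid N seconds from now and nonzero otherwise, so a failing check here would flag a cert that expires within 24 hours. Standalone, assuming any PEM certificate cert.pem:

    if openssl x509 -noout -in cert.pem -checkend 86400; then
      echo "valid for at least another 24h"
    else
      echo "expires within 24h (or already expired)"
    fi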
	I1217 12:00:51.399851 3219848 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:51.399946 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 12:00:51.400058 3219848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:00:51.427405 3219848 cri.go:89] found id: ""
	I1217 12:00:51.427480 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:00:51.435564 3219848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 12:00:51.435593 3219848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 12:00:51.435648 3219848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 12:00:51.443379 3219848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 12:00:51.443986 3219848 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-669680" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.444236 3219848 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-669680" cluster setting kubeconfig missing "newest-cni-669680" context setting]
	I1217 12:00:51.444696 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.446096 3219848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 12:00:51.454141 3219848 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 12:00:51.454214 3219848 kubeadm.go:602] duration metric: took 18.613293ms to restartPrimaryControlPlane
	I1217 12:00:51.454230 3219848 kubeadm.go:403] duration metric: took 54.392206ms to StartCluster
	I1217 12:00:51.454245 3219848 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.454304 3219848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.455245 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.455481 3219848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 12:00:51.455797 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:51.455846 3219848 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:00:51.455911 3219848 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-669680"
	I1217 12:00:51.455924 3219848 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-669680"
	I1217 12:00:51.455953 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.456410 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456591 3219848 addons.go:70] Setting dashboard=true in profile "newest-cni-669680"
	I1217 12:00:51.457002 3219848 addons.go:239] Setting addon dashboard=true in "newest-cni-669680"
	W1217 12:00:51.457012 3219848 addons.go:248] addon dashboard should already be in state true
	I1217 12:00:51.457034 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.457458 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456605 3219848 addons.go:70] Setting default-storageclass=true in profile "newest-cni-669680"
	I1217 12:00:51.458033 3219848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-669680"
	I1217 12:00:51.458306 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.460659 3219848 out.go:179] * Verifying Kubernetes components...
	I1217 12:00:51.463611 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:51.495379 3219848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:00:51.502753 3219848 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.502777 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 12:00:51.502845 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.511997 3219848 addons.go:239] Setting addon default-storageclass=true in "newest-cni-669680"
	I1217 12:00:51.512038 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.512543 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.527586 3219848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 12:00:51.536600 3219848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 12:00:49.260592 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:51.760613 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:51.539513 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 12:00:51.539539 3219848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 12:00:51.539612 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.555471 3219848 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.555502 3219848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:00:51.555570 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.569622 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.592016 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.601832 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.689678 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:51.731294 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.749491 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.814469 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 12:00:51.814496 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 12:00:51.839602 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 12:00:51.839672 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 12:00:51.852764 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 12:00:51.852827 3219848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 12:00:51.865089 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 12:00:51.865152 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 12:00:51.878190 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 12:00:51.878259 3219848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 12:00:51.890831 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 12:00:51.890854 3219848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 12:00:51.903270 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 12:00:51.903294 3219848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 12:00:51.916127 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 12:00:51.916153 3219848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 12:00:51.929059 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 12:00:51.929123 3219848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 12:00:51.942273 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.502896 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.502968 3219848 retry.go:31] will retry after 269.884821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.503026 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503067 3219848 retry.go:31] will retry after 319.702383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503040 3219848 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:00:52.503258 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:52.503300 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503321 3219848 retry.go:31] will retry after 196.810414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
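Every failure in this stretch has the same root cause: kubectl's client-side validation fetches the OpenAPI schema from https://localhost:8443, and the just-restarted apiserver is not yet accepting connections, so each apply is refused and retried with backoff. One way to gate the applies instead of burning retries (a sketch only, not what minikube does; it separately polls for the apiserver process, as the pgrep runs nearby show):

    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw /readyz >/dev/null 2>&1; do
      sleep 1
    done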
	I1217 12:00:52.700893 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.770562 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.770599 3219848 retry.go:31] will retry after 481.518663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.773838 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:52.823221 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:52.855276 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.855328 3219848 retry.go:31] will retry after 391.667259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.894877 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.894917 3219848 retry.go:31] will retry after 200.928151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.004579 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:53.096394 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:53.155868 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.155897 3219848 retry.go:31] will retry after 564.238822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.248228 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:53.253066 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.368787 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.368822 3219848 retry.go:31] will retry after 377.070742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.369052 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.369071 3219848 retry.go:31] will retry after 485.691157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.504052 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:53.720468 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:53.746162 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:53.794993 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.795027 3219848 retry.go:31] will retry after 872.052872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.811480 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.811533 3219848 retry.go:31] will retry after 558.92589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
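The retry.go:31 lines show each failed apply being retried on a growing, jittered delay (485ms, 872ms, 558ms, 1.19s, ... in this run). A minimal sketch of that retry-with-backoff shape, assuming a hypothetical retryWithBackoff helper rather than minikube's actual retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or attempts run out,
	// sleeping a jittered, roughly doubling delay between tries, like
	// the "will retry after ..." lines in the log above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
		return err
	}

	func main() {
		calls := 0
		_ = retryWithBackoff(5, 400*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("connect: connection refused")
			}
			return nil
		})
	}

The jitter is why the observed delays in the log are not strictly increasing even though the base delay grows.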
	I1217 12:00:53.855758 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.922708 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.922745 3219848 retry.go:31] will retry after 803.451465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.003704 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:54.260476 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:56.760549 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:54.370776 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:54.437621 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.437652 3219848 retry.go:31] will retry after 1.190014231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.503835 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:54.667963 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:54.726498 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:54.728210 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.728287 3219848 retry.go:31] will retry after 1.413986656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:54.813279 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.813372 3219848 retry.go:31] will retry after 1.840693776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.005986 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:55.504112 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:55.628242 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:55.689054 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.689136 3219848 retry.go:31] will retry after 1.799425819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.003624 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:56.142943 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:56.205592 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.205625 3219848 retry.go:31] will retry after 2.655712888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.503981 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:56.654730 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:56.717604 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.717641 3219848 retry.go:31] will retry after 1.909418395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.004223 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:57.489437 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:57.503984 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:57.562808 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.562840 3219848 retry.go:31] will retry after 3.72719526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.014740 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.503409 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.627253 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:58.690443 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.690481 3219848 retry.go:31] will retry after 3.549926007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.861704 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:58.923654 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.923683 3219848 retry.go:31] will retry after 2.058003245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:59.003967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:59.260028 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:01.761273 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:59.504167 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.018808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.504031 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.982724 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:01.004335 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:01.111365 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.111399 3219848 retry.go:31] will retry after 3.900095446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.291002 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:01.368946 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.368996 3219848 retry.go:31] will retry after 3.675584678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.503381 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.004403 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.241403 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:02.307939 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.307978 3219848 retry.go:31] will retry after 5.738469139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.504084 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.003562 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:04.005140 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:04.259626 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:06.260640 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:08.759809 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
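[editor's note] Interleaved with the addon retries, a second process (3212985, the no-preload-118262 run) is polling that node's Ready condition against 192.168.85.2:8443 and hitting the same connection-refused. The check behind node_ready.go amounts to fetching the Node object and inspecting its NodeReady condition; a sketch of that logic with client-go follows (a minimal sketch, not minikube's own code; the kubeconfig path is the one visible in the log):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
// While the apiserver is down, Get returns the same "connection refused"
// seen in the log, and the caller is expected to retry.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %q has no Ready condition", name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ready, err := nodeReady(context.Background(), cs, "no-preload-118262")
	fmt.Printf("ready=%v err=%v\n", ready, err)
}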
	I1217 12:01:04.503830 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.003702 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.012660 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:05.045335 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:05.083423 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.083461 3219848 retry.go:31] will retry after 9.235586003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:01:05.118369 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.118401 3219848 retry.go:31] will retry after 3.828272571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.503857 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.003637 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.504078 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.003401 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.503344 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.004170 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
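[editor's note] The twice-per-second pgrep polls above are how the harness waits for a kube-apiserver process to appear inside the node: -x requires an exact match, -n picks the newest matching process, and -f matches the pattern against the full command line. minikube issues the command over SSH; the sketch below runs it locally just to show the loop shape (the interval and timeout are assumptions for illustration):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pattern := "kube-apiserver.*minikube.*"
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	deadline := time.After(2 * time.Minute)
	for {
		select {
		case <-deadline:
			fmt.Println("gave up waiting for kube-apiserver")
			return
		case <-ticker.C:
			// pgrep exits 0 with the PID on stdout when a match exists,
			// and exits 1 when nothing matches.
			out, err := exec.Command("pgrep", "-xnf", pattern).Output()
			if err == nil {
				fmt.Printf("kube-apiserver running, pid %s", out)
				return
			}
		}
	}
}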
	I1217 12:01:08.047658 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:08.113675 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.113710 3219848 retry.go:31] will retry after 7.390134832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.504355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.946950 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:01:09.003509 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:09.011595 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:09.011629 3219848 retry.go:31] will retry after 14.170665244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:01:11.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:13.760361 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:09.503956 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.018957 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.503456 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.004169 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.503808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.003522 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.503603 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.003862 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:14.004363 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:14.319308 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:16.260406 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:18.759622 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:14.385208 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:14.385243 3219848 retry.go:31] will retry after 5.459360953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:14.503378 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.006355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504086 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504108 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:15.572879 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:15.572915 3219848 retry.go:31] will retry after 11.777794795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:16.005530 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:16.503503 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.003649 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.503430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.005004 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.504088 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.003423 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:20.760668 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:23.259693 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:19.503667 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.845708 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:19.909350 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:19.909381 3219848 retry.go:31] will retry after 9.722081791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:20.003736 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:20.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.004457 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.504148 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.003426 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.504235 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.004166 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.183313 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:23.244255 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.244289 3219848 retry.go:31] will retry after 19.619062537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.503427 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:24.006966 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:25.259753 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:27.759647 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:24.503758 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.004125 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.503463 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.004155 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.504576 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.003556 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.351598 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:27.419162 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.419195 3219848 retry.go:31] will retry after 15.164194741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.503619 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.003385 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.503474 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.004314 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:29.760524 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:32.259673 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:29.503968 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.632290 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:29.699987 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:29.700018 3219848 retry.go:31] will retry after 12.658501476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:30.003430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:30.503407 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.003818 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.504094 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.003845 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.503410 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.005413 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.503962 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:34.003405 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:34.259722 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:36.759694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:34.503770 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.004969 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.504211 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.003492 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.503881 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.008063 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.504267 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.004154 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.504195 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:39.005022 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:39.260642 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:41.759666 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:39.504074 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.009459 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.504054 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.004134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.504134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.003867 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
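[editor's note] Each "Run:" line above goes through minikube's ssh_runner, which executes the command inside the node over SSH rather than on the CI host. A hedged sketch of that mechanism with golang.org/x/crypto/ssh follows; the address, host port, user, and key path here are assumptions for illustration (minikube wires these from the cluster's generated credentials):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed key path and forwarded port; minikube generates an SSH key
	// per profile under ~/.minikube/machines/<profile>/.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/minikube/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	config := &ssh.ClientConfig{
		User:            "docker", // assumed node user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32772", config) // host port is an assumption
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo pgrep -xnf kube-apiserver.*minikube.*")
	fmt.Printf("err=%v output=%s\n", err, out)
}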
	I1217 12:01:42.359033 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:42.424319 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.424350 3219848 retry.go:31] will retry after 39.499798177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.503565 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.584549 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:42.654579 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.654612 3219848 retry.go:31] will retry after 22.182784721s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.864124 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:42.925874 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.925916 3219848 retry.go:31] will retry after 18.241160237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
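The `will retry after …s` lines come from minikube's retry helper, which re-runs the failed `kubectl apply` after a randomized backoff for as long as the apiserver stays unreachable. A minimal Go sketch of that pattern (a hypothetical `applyWithRetry`, not minikube's actual retry.go):

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs "kubectl apply --force -f <manifest>" until it
// succeeds or the attempt budget is exhausted, sleeping a jittered backoff
// between tries. This mirrors the "will retry after Ns" lines in the log;
// minikube's real retry helper differs in detail.
func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
		// Randomized backoff in [10s, 30s), comparable to the delays logged above.
		d := 10*time.Second + time.Duration(rand.Int63n(int64(20*time.Second)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```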
	I1217 12:01:43.004102 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:43.504356 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:44.004028 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:44.259623 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:46.260805 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:48.760674 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:44.503929 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.003640 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.503747 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.003443 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.003372 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.503601 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.003536 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.503987 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:49.003434 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:51.260164 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:53.759783 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:49.504162 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.003493 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.503875 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.004324 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.503888 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:51.503983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:51.536666 3219848 cri.go:89] found id: ""
	I1217 12:01:51.536689 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.536698 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:51.536704 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:51.536768 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:51.562047 3219848 cri.go:89] found id: ""
	I1217 12:01:51.562070 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.562078 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:51.562084 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:51.562149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:51.586286 3219848 cri.go:89] found id: ""
	I1217 12:01:51.586309 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.586317 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:51.586323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:51.586381 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:51.611834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.611858 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.611867 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:51.611873 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:51.611942 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:51.637620 3219848 cri.go:89] found id: ""
	I1217 12:01:51.637643 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.637651 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:51.637658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:51.637715 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:51.663176 3219848 cri.go:89] found id: ""
	I1217 12:01:51.663198 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.663206 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:51.663212 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:51.663273 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:51.688038 3219848 cri.go:89] found id: ""
	I1217 12:01:51.688064 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.688083 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:51.688090 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:51.688159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:51.715834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.715860 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.715870 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:51.715879 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:51.715890 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:51.772533 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:51.772567 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:51.788370 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:51.788400 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:51.855552 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:51.847275    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.848081    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849574    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849998    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.851493    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:51.847275    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.848081    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849574    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849998    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.851493    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:51.855615 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:51.855635 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:51.880660 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:51.880693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
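Each diagnostic sweep above issues one `crictl ps -a --quiet --name=<component>` query per expected control-plane container, warns when nothing matches, and then falls back to journalctl and dmesg. A self-contained sketch of that sweep (hypothetical `listContainers` helper; minikube runs the same crictl command remotely via ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers asks crictl for container IDs matching one name filter,
// the same query the log repeats for each control-plane component.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids, err := listContainers(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
```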
	W1217 12:01:56.259727 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:58.760523 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:54.414807 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:54.425488 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:54.425558 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:54.453841 3219848 cri.go:89] found id: ""
	I1217 12:01:54.453870 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.453880 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:54.453887 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:54.453946 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:54.478957 3219848 cri.go:89] found id: ""
	I1217 12:01:54.478982 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.478991 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:54.478998 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:54.479060 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:54.504488 3219848 cri.go:89] found id: ""
	I1217 12:01:54.504516 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.504535 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:54.504543 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:54.504606 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:54.529418 3219848 cri.go:89] found id: ""
	I1217 12:01:54.529445 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.529454 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:54.529460 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:54.529519 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:54.557757 3219848 cri.go:89] found id: ""
	I1217 12:01:54.557781 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.557790 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:54.557797 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:54.557854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:54.586961 3219848 cri.go:89] found id: ""
	I1217 12:01:54.586996 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.587004 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:54.587011 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:54.587077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:54.612590 3219848 cri.go:89] found id: ""
	I1217 12:01:54.612617 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.612626 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:54.612633 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:54.612694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:54.638207 3219848 cri.go:89] found id: ""
	I1217 12:01:54.638234 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.638243 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:54.638253 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:54.638264 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:54.695917 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:54.695955 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:54.712729 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:54.712759 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:54.782298 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:54.774102    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.774684    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776463    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776850    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.778510    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:54.774102    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.774684    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776463    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776850    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.778510    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:54.782321 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:54.782333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:54.807165 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:54.807196 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:01:57.336099 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:57.346978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:57.347048 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:57.371132 3219848 cri.go:89] found id: ""
	I1217 12:01:57.371155 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.371163 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:57.371169 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:57.371232 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:57.396905 3219848 cri.go:89] found id: ""
	I1217 12:01:57.396933 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.396942 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:57.396948 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:57.397011 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:57.425337 3219848 cri.go:89] found id: ""
	I1217 12:01:57.425366 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.425374 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:57.425381 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:57.425440 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:57.449681 3219848 cri.go:89] found id: ""
	I1217 12:01:57.449709 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.449718 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:57.449725 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:57.449784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:57.475302 3219848 cri.go:89] found id: ""
	I1217 12:01:57.475328 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.475337 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:57.475343 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:57.475412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:57.500270 3219848 cri.go:89] found id: ""
	I1217 12:01:57.500344 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.500369 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:57.500389 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:57.500509 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:57.527492 3219848 cri.go:89] found id: ""
	I1217 12:01:57.527519 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.527532 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:57.527538 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:57.527650 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:57.553482 3219848 cri.go:89] found id: ""
	I1217 12:01:57.553549 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.553576 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:57.553602 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:57.553627 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:57.609257 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:57.609292 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:57.625325 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:57.625352 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:57.691022 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:57.682604    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.683106    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.684793    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.685506    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.687043    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:57.682604    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.683106    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.684793    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.685506    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.687043    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:57.691048 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:57.691061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:57.716301 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:57.716333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 12:02:01.260216 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:02:02.764189 3212985 node_ready.go:38] duration metric: took 6m0.005070756s for node "no-preload-118262" to be "Ready" ...
	I1217 12:02:02.767452 3212985 out.go:203] 
	W1217 12:02:02.770608 3212985 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 12:02:02.770638 3212985 out.go:285] * 
	W1217 12:02:02.772986 3212985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 12:02:02.776078 3212985 out.go:203] 
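The interleaved node_ready.go warnings are a poll of the node's Ready condition against the 6m deadline; GUEST_START fails the moment the deadline lapses. A client-go sketch of such a readiness wait, offered as a simplified stand-in for minikube's WaitNodeCondition rather than its actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls a node's Ready condition until it is True or the
// context deadline expires, matching the retry-and-deadline behaviour in
// the node_ready.go lines above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node %q to be Ready: %w", name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "no-preload-118262"); err != nil {
		fmt.Println(err)
	}
}
```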
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511207273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511268646Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511382582Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511463852Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511528459Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511597192Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511654275Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511737137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511807372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511906391Z" level=info msg="Connect containerd service"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.512274624Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.513135250Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526293232Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526625018Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526753222Z" level=info msg="Start subscribing containerd event"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526875034Z" level=info msg="Start recovering state"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.563780213Z" level=info msg="Start event monitor"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.563957803Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564027291Z" level=info msg="Start streaming server"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564090232Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564145632Z" level=info msg="runtime interface starting up..."
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564203560Z" level=info msg="starting plugins..."
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564286234Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 11:56:00 no-preload-118262 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.567526269Z" level=info msg="containerd successfully booted in 0.088039s"
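The `failed to load cni during init` error in the containerd section is expected at this stage: the CRI plugin loads CNI config lazily from /etc/cni/net.d, and minikube's CNI writes its config only once the node is up, which never happens in this run. A small check for the same condition (hypothetical helper; the file patterns are assumed from libcni defaults):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether any CNI config file exists in dir, the
// condition behind containerd's "no network config found" warning.
func hasCNIConfig(dir string) bool {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		if m, _ := filepath.Glob(filepath.Join(dir, pat)); len(m) > 0 {
			return true
		}
	}
	return false
}

func main() {
	if !hasCNIConfig("/etc/cni/net.d") {
		fmt.Fprintln(os.Stderr, "cni plugin not initialized: no network config in /etc/cni/net.d")
	}
}
```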
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:04.111906    3953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:04.112958    3953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:04.114809    3953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:04.115518    3953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:04.117351    3953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 12:02:04 up 17:44,  0 user,  load average: 0.52, 0.65, 1.23
	Linux no-preload-118262 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 12:02:01 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:02:01 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 17 12:02:01 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:02:01 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:02:01 no-preload-118262 kubelet[3833]: E1217 12:02:01.801089    3833 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:02:01 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:02:01 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:02:02 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 17 12:02:02 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:02:02 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:02:02 no-preload-118262 kubelet[3838]: E1217 12:02:02.557399    3838 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:02:02 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:02:02 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:02:03 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 17 12:02:03 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:02:03 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:02:03 no-preload-118262 kubelet[3856]: E1217 12:02:03.243924    3856 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:02:03 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:02:03 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:02:03 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 17 12:02:03 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:02:04 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:02:04 no-preload-118262 kubelet[3945]: E1217 12:02:04.061351    3945 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:02:04 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:02:04 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
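The kubelet section of the dump pinpoints the root cause for this group of failures: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported"), systemd restart-loops it (counter at 480 and climbing), and with no kubelet the apiserver never comes up, which accounts for every `connection refused` above. One way to check a host's cgroup version from Go, as a sketch assuming golang.org/x/sys/unix:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// Reports whether /sys/fs/cgroup is the cgroup v2 unified hierarchy.
// kubelet v1.35.0-rc.1 validates this at startup and exits on cgroup v1,
// producing the restart loop visible in the kubelet log above.
func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified): supported by kubelet v1.35+")
	} else {
		fmt.Println("cgroup v1: kubelet v1.35.0-rc.1 refuses to start")
	}
}
```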
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262: exit status 2 (332.073151ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-118262" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.62s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (107.69s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-669680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 11:59:03.818294 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:59:28.205413 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:59:31.522865 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-669680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.062048199s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
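The `--validate=false` hint in these errors is a red herring: validation fails only because kubectl cannot fetch the OpenAPI schema from the dead apiserver, so skipping it would surface the same connection refusal at apply time. A cheap reachability probe to distinguish the two cases (hypothetical helper; note /healthz may require auth on hardened clusters, and InsecureSkipVerify is acceptable only for a localhost liveness check):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverUp does the cheapest possible liveness probe against the
// apiserver's /healthz endpoint.
func apiserverUp(base string) bool {
	c := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get(base + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	if !apiserverUp("https://localhost:8443") {
		fmt.Println("apiserver unreachable; kubectl apply will fail regardless of --validate")
	}
}
```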
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-669680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-669680
helpers_test.go:244: (dbg) docker inspect newest-cni-669680:

-- stdout --
	[
	    {
	        "Id": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	        "Created": "2025-12-17T11:50:38.904543162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3205329,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:50:38.98558565Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hosts",
	        "LogPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc-json.log",
	        "Name": "/newest-cni-669680",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-669680:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-669680",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	                "LowerDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-669680",
	                "Source": "/var/lib/docker/volumes/newest-cni-669680/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-669680",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-669680",
	                "name.minikube.sigs.k8s.io": "newest-cni-669680",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b6274c51925abac74af91e3b11cb0a4d5cf37e009a5faa7c8800fc2099930727",
	            "SandboxKey": "/var/run/docker/netns/b6274c51925a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-669680": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:80:8e:4b:68:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e84740d61c89f51b13c32d88b9c5aafc9e8e1ba5e275e3db72c9a38077e44a94",
	                    "EndpointID": "de0b9853e35e2b17e7ac367a79084e643b7446b0efa3f0d2161f29a374748652",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-669680",
	                        "23474ef32ddb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
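The inspect dump above records the container's host-port mappings under NetworkSettings.Ports. As a minimal sketch (profile name taken from this run), a single mapping can be extracted from the same data with the Go template syntax that minikube itself uses later in this log for 22/tcp:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-669680

Against the dump above this would print 36046, the host port forwarded to the apiserver port.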
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680: exit status 6 (308.711466ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 12:00:41.460049 3219336 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-669680" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
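Exit status 6 here matches the stale-kubeconfig condition shown on stdout: the "newest-cni-669680" entry is missing from the kubeconfig file, so the status check cannot resolve the apiserver endpoint even though the host is Running. Assuming the profile name from this run, the fix the warning itself suggests is:

    minikube update-context -p newest-cni-669680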
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-669680 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-628462 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:46 UTC │ 17 Dec 25 11:47 UTC │
	│ addons  │ enable dashboard -p embed-certs-628462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ start   │ -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:47 UTC │ 17 Dec 25 11:47 UTC │
	│ image   │ embed-certs-628462 image list --format=json                                                                                                                                                                                                              │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ pause   │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ unpause │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ stop    │ -p no-preload-118262 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ addons  │ enable dashboard -p no-preload-118262 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-669680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
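Reading the audit table: the two start invocations with an empty END TIME (newest-cni-669680 at 11:50 UTC and no-preload-118262 at 11:55 UTC) are the runs that never completed, consistent with the FirstStart/SecondStart failures in the summary. The hung newest-cni start can be replayed with the command recorded in the table:

    out/minikube-linux-arm64 start -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1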
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:55:54
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:55:54.097672 3212985 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:55:54.097800 3212985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:54.097813 3212985 out.go:374] Setting ErrFile to fd 2...
	I1217 11:55:54.097821 3212985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:55:54.098207 3212985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:55:54.098683 3212985 out.go:368] Setting JSON to false
	I1217 11:55:54.100030 3212985 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63504,"bootTime":1765909050,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:55:54.100100 3212985 start.go:143] virtualization:  
	I1217 11:55:54.103066 3212985 out.go:179] * [no-preload-118262] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:55:54.106891 3212985 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:55:54.107071 3212985 notify.go:221] Checking for updates...
	I1217 11:55:54.112925 3212985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:55:54.115810 3212985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:55:54.118639 3212985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:55:54.121576 3212985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:55:54.124535 3212985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:55:54.127987 3212985 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:55:54.128592 3212985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:55:54.156670 3212985 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:55:54.156789 3212985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:54.209915 3212985 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:55:54.200713336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:55:54.210021 3212985 docker.go:319] overlay module found
	I1217 11:55:54.213155 3212985 out.go:179] * Using the docker driver based on existing profile
	I1217 11:55:54.215964 3212985 start.go:309] selected driver: docker
	I1217 11:55:54.215988 3212985 start.go:927] validating driver "docker" against &{Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:54.216113 3212985 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:55:54.217029 3212985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:55:54.283254 3212985 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:55:54.273141828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:55:54.283582 3212985 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:55:54.283614 3212985 cni.go:84] Creating CNI manager for ""
	I1217 11:55:54.283667 3212985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:55:54.283720 3212985 start.go:353] cluster config:
	{Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:55:54.286974 3212985 out.go:179] * Starting "no-preload-118262" primary control-plane node in "no-preload-118262" cluster
	I1217 11:55:54.289858 3212985 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 11:55:54.292806 3212985 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 11:55:54.295787 3212985 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:55:54.295891 3212985 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 11:55:54.295951 3212985 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/config.json ...
	I1217 11:55:54.296271 3212985 cache.go:107] acquiring lock: {Name:mk815fc0c67b76ed2ee0b075f6917d43e67b13d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296357 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 11:55:54.296371 3212985 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.017µs
	I1217 11:55:54.296388 3212985 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 11:55:54.296401 3212985 cache.go:107] acquiring lock: {Name:mk11644c35fa0d35fcf9d5a865af6c28a7df16d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296484 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 11:55:54.296496 3212985 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 96.622µs
	I1217 11:55:54.296503 3212985 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296515 3212985 cache.go:107] acquiring lock: {Name:mk02712d952db0244ab56f62810e58a983831503 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296551 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 11:55:54.296561 3212985 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 47.679µs
	I1217 11:55:54.296569 3212985 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296591 3212985 cache.go:107] acquiring lock: {Name:mk436387f099b91bd6762b69e3678ebc0f9561cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296627 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 11:55:54.296637 3212985 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 52.323µs
	I1217 11:55:54.296644 3212985 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296653 3212985 cache.go:107] acquiring lock: {Name:mkf4cd732ad0857bbeaf7d91402ed78da15112e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296678 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 11:55:54.296683 3212985 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 32.098µs
	I1217 11:55:54.296690 3212985 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 11:55:54.296699 3212985 cache.go:107] acquiring lock: {Name:mka934c06f25efbc149ef4769eaae5adad4ea53a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296728 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1217 11:55:54.296733 3212985 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 35.873µs
	I1217 11:55:54.296739 3212985 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 11:55:54.296748 3212985 cache.go:107] acquiring lock: {Name:mkb53641077bc34de612e9b78566264ac82d9b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296778 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 11:55:54.296787 3212985 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 39.884µs
	I1217 11:55:54.296793 3212985 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 11:55:54.296801 3212985 cache.go:107] acquiring lock: {Name:mkca0a51840ba852f371cde8bcc41ec807c30a00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.296838 3212985 cache.go:115] /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 11:55:54.296847 3212985 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 46.588µs
	I1217 11:55:54.296856 3212985 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 11:55:54.296862 3212985 cache.go:87] Successfully saved all images to host disk.
	I1217 11:55:54.316030 3212985 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 11:55:54.316051 3212985 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 11:55:54.316070 3212985 cache.go:243] Successfully downloaded all kic artifacts
	I1217 11:55:54.316101 3212985 start.go:360] acquireMachinesLock for no-preload-118262: {Name:mka8b15d744256405cc79d3bb936a81c229c3b9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:55:54.316161 3212985 start.go:364] duration metric: took 39.77µs to acquireMachinesLock for "no-preload-118262"
	I1217 11:55:54.316185 3212985 start.go:96] Skipping create...Using existing machine configuration
	I1217 11:55:54.316190 3212985 fix.go:54] fixHost starting: 
	I1217 11:55:54.316490 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:55:54.332759 3212985 fix.go:112] recreateIfNeeded on no-preload-118262: state=Stopped err=<nil>
	W1217 11:55:54.332793 3212985 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 11:55:54.338085 3212985 out.go:252] * Restarting existing docker container for "no-preload-118262" ...
	I1217 11:55:54.338180 3212985 cli_runner.go:164] Run: docker start no-preload-118262
	I1217 11:55:54.606459 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:55:54.631006 3212985 kic.go:432] container "no-preload-118262" state is running.
	I1217 11:55:54.631393 3212985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:55:54.652451 3212985 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/config.json ...
	I1217 11:55:54.652674 3212985 machine.go:94] provisionDockerMachine start ...
	I1217 11:55:54.652732 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:54.677707 3212985 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:54.677814 3212985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36048 <nil> <nil>}
	I1217 11:55:54.677822 3212985 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:55:54.678839 3212985 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 11:55:57.812197 3212985 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-118262
	
	I1217 11:55:57.812222 3212985 ubuntu.go:182] provisioning hostname "no-preload-118262"
	I1217 11:55:57.812295 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:57.830852 3212985 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:57.830954 3212985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36048 <nil> <nil>}
	I1217 11:55:57.830964 3212985 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-118262 && echo "no-preload-118262" | sudo tee /etc/hostname
	I1217 11:55:57.977757 3212985 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-118262
	
	I1217 11:55:57.977834 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:57.995318 3212985 main.go:143] libmachine: Using SSH client type: native
	I1217 11:55:57.995438 3212985 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36048 <nil> <nil>}
	I1217 11:55:57.995454 3212985 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118262' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118262/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118262' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:55:58.129000 3212985 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:55:58.129026 3212985 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 11:55:58.129048 3212985 ubuntu.go:190] setting up certificates
	I1217 11:55:58.129058 3212985 provision.go:84] configureAuth start
	I1217 11:55:58.129137 3212985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:55:58.146660 3212985 provision.go:143] copyHostCerts
	I1217 11:55:58.146727 3212985 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 11:55:58.146737 3212985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 11:55:58.146821 3212985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 11:55:58.146983 3212985 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 11:55:58.146989 3212985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 11:55:58.147018 3212985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 11:55:58.147082 3212985 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 11:55:58.147087 3212985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 11:55:58.147112 3212985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 11:55:58.147174 3212985 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.no-preload-118262 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-118262]
	I1217 11:55:58.677412 3212985 provision.go:177] copyRemoteCerts
	I1217 11:55:58.677487 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:55:58.677537 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:58.696153 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:58.796388 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 11:55:58.813975 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 11:55:58.831950 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:55:58.849412 3212985 provision.go:87] duration metric: took 720.33021ms to configureAuth
	I1217 11:55:58.849488 3212985 ubuntu.go:206] setting minikube options for container-runtime
	I1217 11:55:58.849743 3212985 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:55:58.849760 3212985 machine.go:97] duration metric: took 4.197077033s to provisionDockerMachine
	I1217 11:55:58.849769 3212985 start.go:293] postStartSetup for "no-preload-118262" (driver="docker")
	I1217 11:55:58.849784 3212985 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:55:58.849838 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:55:58.849879 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:58.867585 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:58.964748 3212985 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:55:58.968333 3212985 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 11:55:58.968360 3212985 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 11:55:58.968372 3212985 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 11:55:58.968454 3212985 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 11:55:58.968542 3212985 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 11:55:58.968640 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 11:55:58.976685 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:55:58.995461 3212985 start.go:296] duration metric: took 145.672692ms for postStartSetup
	I1217 11:55:58.995541 3212985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:55:58.995586 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:59.019538 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:59.117541 3212985 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 11:55:59.122267 3212985 fix.go:56] duration metric: took 4.80606985s for fixHost
	I1217 11:55:59.122307 3212985 start.go:83] releasing machines lock for "no-preload-118262", held for 4.806123002s
	I1217 11:55:59.122382 3212985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-118262
	I1217 11:55:59.141556 3212985 ssh_runner.go:195] Run: cat /version.json
	I1217 11:55:59.141603 3212985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:55:59.141611 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:59.141660 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:55:59.164620 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:59.164771 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:55:59.351128 3212985 ssh_runner.go:195] Run: systemctl --version
	I1217 11:55:59.358084 3212985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:55:59.362663 3212985 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:55:59.362766 3212985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:55:59.371125 3212985 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 11:55:59.371153 3212985 start.go:496] detecting cgroup driver to use...
	I1217 11:55:59.371206 3212985 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 11:55:59.371277 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 11:55:59.389287 3212985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 11:55:59.403831 3212985 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:55:59.403893 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:55:59.419497 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:55:59.432548 3212985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:55:59.542751 3212985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:55:59.663663 3212985 docker.go:234] disabling docker service ...
	I1217 11:55:59.663734 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:55:59.680687 3212985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:55:59.694833 3212985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:55:59.829203 3212985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:55:59.950677 3212985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:55:59.964080 3212985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:55:59.978475 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 11:55:59.987229 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 11:55:59.996111 3212985 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 11:55:59.996210 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 11:56:00.040190 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:56:00.080003 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 11:56:00.111408 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 11:56:00.135837 3212985 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:56:00.154709 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 11:56:00.192639 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 11:56:00.215745 3212985 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 11:56:00.252832 3212985 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:56:00.276526 3212985 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:56:00.295796 3212985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:00.437457 3212985 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 11:56:00.567606 3212985 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 11:56:00.567753 3212985 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 11:56:00.572865 3212985 start.go:564] Will wait 60s for crictl version
	I1217 11:56:00.572972 3212985 ssh_runner.go:195] Run: which crictl
	I1217 11:56:00.577625 3212985 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 11:56:00.604322 3212985 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 11:56:00.604502 3212985 ssh_runner.go:195] Run: containerd --version
	I1217 11:56:00.631560 3212985 ssh_runner.go:195] Run: containerd --version
	I1217 11:56:00.656469 3212985 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 11:56:00.659351 3212985 cli_runner.go:164] Run: docker network inspect no-preload-118262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 11:56:00.676349 3212985 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 11:56:00.680496 3212985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:56:00.690917 3212985 kubeadm.go:884] updating cluster {Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:56:00.691048 3212985 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 11:56:00.691104 3212985 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:56:00.720555 3212985 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 11:56:00.720582 3212985 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:56:00.720590 3212985 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 11:56:00.720694 3212985 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118262 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
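This rendered kubelet unit and its ExecStart override are what the scp calls later in this log write to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once the node is reachable, the effective merged unit can be checked with, e.g. (profile name from this run):

    minikube ssh -p no-preload-118262 "sudo systemctl cat kubelet"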
	I1217 11:56:00.720772 3212985 ssh_runner.go:195] Run: sudo crictl info
	I1217 11:56:00.749210 3212985 cni.go:84] Creating CNI manager for ""
	I1217 11:56:00.749238 3212985 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 11:56:00.749254 3212985 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:56:00.749310 3212985 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118262 NodeName:no-preload-118262 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:56:00.749505 3212985 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-118262"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:56:00.749576 3212985 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 11:56:00.757442 3212985 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:56:00.757524 3212985 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:56:00.765473 3212985 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 11:56:00.778740 3212985 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 11:56:00.792394 3212985 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1217 11:56:00.806454 3212985 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 11:56:00.810279 3212985 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
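	The bash one-liner above pins control-plane.minikube.internal in /etc/hosts: it filters out any stale entry for that name, appends the current IP, and copies the result back via sudo because the SSH user cannot write /etc/hosts directly. The same rewrite, sketched in Go for clarity (illustrative only; minikube actually shells out as logged):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost mirrors the /etc/hosts rewrite: drop any line ending in
    // "<tab>name", append "ip<tab>name", and write the file back. The real
    // flow goes through /tmp/h.$$ and `sudo cp` instead of writing in place.
    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }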
	I1217 11:56:00.820510 3212985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:00.934464 3212985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:56:00.950819 3212985 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262 for IP: 192.168.85.2
	I1217 11:56:00.950891 3212985 certs.go:195] generating shared ca certs ...
	I1217 11:56:00.950922 3212985 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:00.951114 3212985 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 11:56:00.951194 3212985 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 11:56:00.951232 3212985 certs.go:257] generating profile certs ...
	I1217 11:56:00.951382 3212985 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/client.key
	I1217 11:56:00.951530 3212985 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key.082f94c0
	I1217 11:56:00.951606 3212985 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key
	I1217 11:56:00.951762 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 11:56:00.951827 3212985 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 11:56:00.951867 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:56:00.951923 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:56:00.952000 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:56:00.952049 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 11:56:00.952133 3212985 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 11:56:00.952760 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:56:00.978711 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:56:00.997524 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:56:01.017452 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:56:01.037516 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 11:56:01.055698 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:56:01.078786 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:56:01.098475 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/no-preload-118262/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 11:56:01.116977 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 11:56:01.136015 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:56:01.156004 3212985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 11:56:01.175302 3212985 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:56:01.190197 3212985 ssh_runner.go:195] Run: openssl version
	I1217 11:56:01.197107 3212985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.205490 3212985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 11:56:01.214061 3212985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.218349 3212985 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.218423 3212985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 11:56:01.260525 3212985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 11:56:01.268554 3212985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.276397 3212985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:56:01.284768 3212985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.289382 3212985 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.289516 3212985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:56:01.331248 3212985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:56:01.338774 3212985 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.346651 3212985 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 11:56:01.354564 3212985 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.358698 3212985 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.358775 3212985 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 11:56:01.400939 3212985 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 11:56:01.408692 3212985 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:56:01.412548 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 11:56:01.453899 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 11:56:01.495014 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 11:56:01.536150 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 11:56:01.577723 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 11:56:01.619271 3212985 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
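	Each `openssl x509 -checkend 86400` call above exits nonzero if the certificate expires within the next 86400 seconds (24 hours); a clean exit on every cert is what lets minikube skip regeneration. An equivalent check in Go using crypto/x509 (a sketch; the real code shells out to openssl as logged):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, mimicking `openssl x509 -noout -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	if soon {
    		os.Exit(1) // same convention as openssl -checkend: nonzero when expiring
    	}
    }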
	I1217 11:56:01.660657 3212985 kubeadm.go:401] StartCluster: {Name:no-preload-118262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-118262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:56:01.660750 3212985 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 11:56:01.660833 3212985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:56:01.687958 3212985 cri.go:89] found id: ""
	I1217 11:56:01.688081 3212985 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:56:01.696230 3212985 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 11:56:01.696252 3212985 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 11:56:01.696304 3212985 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 11:56:01.704102 3212985 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:56:01.704665 3212985 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-118262" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:56:01.705100 3212985 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-118262" cluster setting kubeconfig missing "no-preload-118262" context setting]
	I1217 11:56:01.705938 3212985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:01.707388 3212985 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 11:56:01.717641 3212985 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 11:56:01.717678 3212985 kubeadm.go:602] duration metric: took 21.41966ms to restartPrimaryControlPlane
	I1217 11:56:01.717689 3212985 kubeadm.go:403] duration metric: took 57.040291ms to StartCluster
	I1217 11:56:01.717705 3212985 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:01.717769 3212985 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:56:01.718373 3212985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:56:01.718582 3212985 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 11:56:01.718926 3212985 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:56:01.718998 3212985 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 11:56:01.719125 3212985 addons.go:70] Setting storage-provisioner=true in profile "no-preload-118262"
	I1217 11:56:01.719146 3212985 addons.go:239] Setting addon storage-provisioner=true in "no-preload-118262"
	I1217 11:56:01.719168 3212985 host.go:66] Checking if "no-preload-118262" exists ...
	I1217 11:56:01.719170 3212985 addons.go:70] Setting dashboard=true in profile "no-preload-118262"
	I1217 11:56:01.719228 3212985 addons.go:239] Setting addon dashboard=true in "no-preload-118262"
	W1217 11:56:01.719262 3212985 addons.go:248] addon dashboard should already be in state true
	I1217 11:56:01.719308 3212985 host.go:66] Checking if "no-preload-118262" exists ...
	I1217 11:56:01.719638 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.719916 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.720320 3212985 addons.go:70] Setting default-storageclass=true in profile "no-preload-118262"
	I1217 11:56:01.720337 3212985 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-118262"
	I1217 11:56:01.720702 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.724486 3212985 out.go:179] * Verifying Kubernetes components...
	I1217 11:56:01.727633 3212985 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:56:01.766705 3212985 addons.go:239] Setting addon default-storageclass=true in "no-preload-118262"
	I1217 11:56:01.766751 3212985 host.go:66] Checking if "no-preload-118262" exists ...
	I1217 11:56:01.767177 3212985 cli_runner.go:164] Run: docker container inspect no-preload-118262 --format={{.State.Status}}
	I1217 11:56:01.793928 3212985 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:56:01.799560 3212985 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:01.799586 3212985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:56:01.799655 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:56:01.806809 3212985 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:56:01.806838 3212985 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:56:01.806902 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:56:01.809039 3212985 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 11:56:01.812535 3212985 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 11:56:01.817510 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 11:56:01.817535 3212985 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 11:56:01.817604 3212985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-118262
	I1217 11:56:01.867768 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:56:01.868081 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
	I1217 11:56:01.868642 3212985 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36048 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/no-preload-118262/id_ed25519 Username:docker}
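	The `docker container inspect -f` calls above use a Go template to pull the host port mapped to the container's 22/tcp binding, which is what feeds the SSH clients their port (36048 here). The template can be exercised standalone with text/template over a stand-in structure -- the mock data below is hypothetical, standing in for Docker's real inspect output:

    package main

    import (
    	"os"
    	"text/template"
    )

    // PortBinding mirrors the shape the template dereferences under
    // .NetworkSettings.Ports in Docker's inspect output.
    type PortBinding struct {
    	HostIP   string
    	HostPort string
    }

    func main() {
    	// Hypothetical stand-in for `docker container inspect` data.
    	data := map[string]interface{}{
    		"NetworkSettings": map[string]interface{}{
    			"Ports": map[string][]PortBinding{
    				"22/tcp": {{HostIP: "127.0.0.1", HostPort: "36048"}},
    			},
    		},
    	}
    	// The exact format string from the log lines above.
    	tmpl := template.Must(template.New("port").Parse(
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
    	if err := tmpl.Execute(os.Stdout, data); err != nil { // prints 36048
    		panic(err)
    	}
    }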
	I1217 11:56:01.951610 3212985 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:56:02.024186 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 11:56:02.024265 3212985 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 11:56:02.043227 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 11:56:02.043295 3212985 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 11:56:02.048999 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:56:02.054810 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:02.086887 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 11:56:02.086960 3212985 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 11:56:02.105255 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 11:56:02.105288 3212985 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 11:56:02.121678 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 11:56:02.121719 3212985 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 11:56:02.137737 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 11:56:02.137779 3212985 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 11:56:02.153356 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 11:56:02.153397 3212985 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 11:56:02.168513 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 11:56:02.168557 3212985 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 11:56:02.185798 3212985 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:56:02.185838 3212985 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 11:56:02.201465 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:02.758705 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.758748 3212985 retry.go:31] will retry after 229.540303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:02.758805 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.758819 3212985 retry.go:31] will retry after 199.856736ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:02.759004 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.759019 3212985 retry.go:31] will retry after 172.784882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:02.759090 3212985 node_ready.go:35] waiting up to 6m0s for node "no-preload-118262" to be "Ready" ...
	I1217 11:56:02.932840 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 11:56:02.959390 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:02.988844 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:03.015687 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.015717 3212985 retry.go:31] will retry after 427.179701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:03.053926 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.053954 3212985 retry.go:31] will retry after 351.36ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:03.071903 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.071938 3212985 retry.go:31] will retry after 460.512525ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.405971 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:56:03.443451 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:03.475863 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.475958 3212985 retry.go:31] will retry after 760.184682ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.533075 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:03.533848 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.533930 3212985 retry.go:31] will retry after 500.153362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:03.629508 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:03.629561 3212985 retry.go:31] will retry after 828.549967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.034401 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:04.098672 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.098706 3212985 retry.go:31] will retry after 456.814782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.236935 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:04.357588 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.357621 3212985 retry.go:31] will retry after 773.010299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.458872 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:04.516437 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.516469 3212985 retry.go:31] will retry after 1.201644683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.556582 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:04.622293 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:04.622328 3212985 retry.go:31] will retry after 1.824101164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:04.760127 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:05.131775 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:05.197068 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.197101 3212985 retry.go:31] will retry after 718.007742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.719362 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:05.829095 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.829128 3212985 retry.go:31] will retry after 1.266711526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.915322 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:05.976930 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:05.976963 3212985 retry.go:31] will retry after 983.864547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:06.446716 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:06.526752 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:06.526789 3212985 retry.go:31] will retry after 1.791049068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:06.962003 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:07.021949 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:07.021981 3212985 retry.go:31] will retry after 3.775428423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:07.096119 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:07.154813 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:07.154841 3212985 retry.go:31] will retry after 1.6043331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:07.261035 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:08.318583 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:08.381665 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:08.381708 3212985 retry.go:31] will retry after 3.517495633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:08.759662 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:08.864890 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:08.864925 3212985 retry.go:31] will retry after 2.28260361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:09.760003 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
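In parallel with the addon retries, node_ready.go polls the node's Ready condition and keeps retrying while the apiserver is unreachable. The same check, sketched with client-go; only the kubeconfig path and node name are taken from the log, and the rest is illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-118262", metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this is the "connection refused"
		// path seen in the log; the caller logs a warning and retries.
		fmt.Println("will retry:", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node Ready condition: %s\n", c.Status)
		}
	}
}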
	I1217 11:56:10.798319 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:10.860002 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:10.860034 3212985 retry.go:31] will retry after 4.82591476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.148644 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:11.216089 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.216123 3212985 retry.go:31] will retry after 6.175133091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.900240 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:11.969428 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:11.969471 3212985 retry.go:31] will retry after 2.437731885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:56:12.260387 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:14.408207 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:14.471530 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:14.471564 3212985 retry.go:31] will retry after 7.973001246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1217 11:56:14.760396 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:15.686226 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:15.751739 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:15.751823 3212985 retry.go:31] will retry after 4.990913672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1217 11:56:16.760725 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:17.392069 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:17.450109 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:17.450199 3212985 retry.go:31] will retry after 4.605565076s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1217 11:56:19.260667 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:20.743360 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:20.828411 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:20.828465 3212985 retry.go:31] will retry after 11.110369506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1217 11:56:21.759604 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:22.056015 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:22.115928 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:22.116008 3212985 retry.go:31] will retry after 10.310245173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
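Note how the dashboard, storage-provisioner and storageclass applies retry on independent schedules, interleaved with the node polls; that pattern suggests each addon is enabled on its own goroutine. A sketch of that structure, reusing the retryApply sketch above (the concurrency layout and helper names are assumptions, not minikube's confirmed implementation):

    package sketch

    import (
        "log"
        "sync"
    )

    // enableAddons runs each addon's apply-with-retry concurrently, so
    // their backoff schedules interleave in the log the way they do here.
    func enableAddons(addons map[string]func() error) {
        var wg sync.WaitGroup
        for name, apply := range addons {
            wg.Add(1)
            go func(name string, apply func() error) {
                defer wg.Done()
                if err := retryApply(apply, 10); err != nil {
                    // Mirrors the out.go warning at the end of this log:
                    // once retries are exhausted the last error is surfaced.
                    log.Printf("! Enabling '%s' returned an error: %v", name, err)
                }
            }(name, apply)
        }
        wg.Wait()
    }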
	I1217 11:56:22.444820 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:22.509039 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:22.509077 3212985 retry.go:31] will retry after 13.279816116s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1217 11:56:23.759723 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:26.259612 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:28.260333 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:30.759694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
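The node_ready.go:55 warnings show a second loop running alongside the addon retries: the node's Ready condition is polled every two to two-and-a-half seconds, and each poll fails with the same connection-refused error. A client-go sketch of such a poll, assuming a standard clientset; illustrative only, not minikube's code:

    package sketch

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its Ready condition is True,
    // logging and retrying on transient errors such as the refused
    // connections above. Sketch only.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        tick := time.NewTicker(2 * time.Second)
        defer tick.Stop()
        for {
            node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                log.Printf("error getting node %q condition \"Ready\" status (will retry): %v", name, err)
            } else {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }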
	I1217 11:56:31.939110 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:32.013382 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:32.013417 3212985 retry.go:31] will retry after 17.792843999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 11:56:32.426534 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:32.489297 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:32.489332 3212985 retry.go:31] will retry after 10.214719089s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1217 11:56:32.760038 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:35.259670 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:35.789928 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:35.882559 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:35.882592 3212985 retry.go:31] will retry after 9.227629247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1217 11:56:37.260542 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:39.759863 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:42.259856 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:42.704316 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:42.766302 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:42.766335 3212985 retry.go:31] will retry after 16.793347769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1217 11:56:44.260613 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:45.111278 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:56:45.238863 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:45.238978 3212985 retry.go:31] will retry after 27.971446484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1217 11:56:46.759704 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:48.760327 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:49.806921 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:56:49.869443 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:49.869476 3212985 retry.go:31] will retry after 26.53119581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1217 11:56:50.760462 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:53.259758 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:55.759768 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:56:58.259631 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:56:59.560400 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:56:59.620061 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:56:59.620096 3212985 retry.go:31] will retry after 23.364320547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1217 11:57:00.259931 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:02.761870 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:05.259592 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:07.759629 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:09.760369 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:12.259883 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:13.211445 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:57:13.333945 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:57:13.333983 3212985 retry.go:31] will retry after 44.1812533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1217 11:57:14.260040 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:16.260864 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:16.401385 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:57:16.464135 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 11:57:16.464167 3212985 retry.go:31] will retry after 45.341892172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:18.759728 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:20.760439 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:22.985044 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 11:57:23.107103 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:23.107213 3212985 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
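The `--validate=false` hint in this stderr is a red herring while the apiserver is down: it only skips the client-side schema download, and the apply request itself still needs a reachable server. Under that assumption, the same command with validation disabled would merely fail one step later:

    # hedged sketch: the log's own apply command, minus validation
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml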
	W1217 11:57:23.259704 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:25.759680 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:28.259589 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:30.759848 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:33.259631 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:35.259696 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:37.759650 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:39.759939 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:42.259840 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:44.759690 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:46.760542 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:49.259755 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:51.259894 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:53.259968 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:57:55.760675 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:57:57.516068 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 11:57:57.626784 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:57:57.626882 3212985 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1217 11:57:58.259617 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:00.259717 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:58:01.806452 3212985 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 11:58:01.869925 3212985 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 11:58:01.870041 3212985 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 11:58:01.873020 3212985 out.go:179] * Enabled addons: 
	I1217 11:58:01.875902 3212985 addons.go:530] duration metric: took 2m0.156897144s for enable addons: enabled=[]
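The two-minute addon window closes with enabled=[], so none of dashboard, storage-provisioner, or default-storageclass were applied. If the cluster were later recovered, the outcome could be confirmed with the standard addons command (profile name taken from the surrounding log lines):

    minikube -p no-preload-118262 addons list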
	W1217 11:58:02.759596 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:04.759668 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:07.259570 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:09.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:11.759720 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:14.259689 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:16.759733 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:19.259603 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:21.259694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:23.759673 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:26.259638 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:28.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:30.759896 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:33.259742 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:35.759679 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:38.259699 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:40.759816 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:43.259680 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:45.260027 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:47.759590 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 11:58:53.028972 3204903 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001753064s
	I1217 11:58:53.029259 3204903 kubeadm.go:319] 
	I1217 11:58:53.029324 3204903 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 11:58:53.029359 3204903 kubeadm.go:319] 	- The kubelet is not running
	I1217 11:58:53.029464 3204903 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 11:58:53.029468 3204903 kubeadm.go:319] 
	I1217 11:58:53.029572 3204903 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 11:58:53.029604 3204903 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 11:58:53.029645 3204903 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 11:58:53.029650 3204903 kubeadm.go:319] 
	I1217 11:58:53.035722 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 11:58:53.036145 3204903 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 11:58:53.036254 3204903 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:58:53.036508 3204903 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 11:58:53.036516 3204903 kubeadm.go:319] 
	I1217 11:58:53.036585 3204903 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
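kubeadm has now exhausted its 4m0s kubelet health window. The probe it polls and the triage commands it recommends can all be run by hand; <profile> below is a placeholder, since this section interleaves logs from two test processes:

    # the exact healthz endpoint kubeadm polls
    minikube -p <profile> ssh -- curl -sSL http://127.0.0.1:10248/healthz
    # the troubleshooting commands the message above suggests
    minikube -p <profile> ssh -- systemctl status kubelet
    minikube -p <profile> ssh -- sudo journalctl -xeu kubelet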
	I1217 11:58:53.036636 3204903 kubeadm.go:403] duration metric: took 8m6.348588119s to StartCluster
	I1217 11:58:53.036680 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 11:58:53.036746 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 11:58:53.092234 3204903 cri.go:89] found id: ""
	I1217 11:58:53.092255 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.092264 3204903 logs.go:284] No container was found matching "kube-apiserver"
	I1217 11:58:53.092270 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 11:58:53.092329 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 11:58:53.120381 3204903 cri.go:89] found id: ""
	I1217 11:58:53.120404 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.120412 3204903 logs.go:284] No container was found matching "etcd"
	I1217 11:58:53.120440 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 11:58:53.120504 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 11:58:53.150913 3204903 cri.go:89] found id: ""
	I1217 11:58:53.150935 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.150943 3204903 logs.go:284] No container was found matching "coredns"
	I1217 11:58:53.150949 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 11:58:53.151010 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 11:58:53.177002 3204903 cri.go:89] found id: ""
	I1217 11:58:53.177028 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.177037 3204903 logs.go:284] No container was found matching "kube-scheduler"
	I1217 11:58:53.177044 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 11:58:53.177105 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 11:58:53.202075 3204903 cri.go:89] found id: ""
	I1217 11:58:53.202101 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.202109 3204903 logs.go:284] No container was found matching "kube-proxy"
	I1217 11:58:53.202116 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 11:58:53.202175 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 11:58:53.230674 3204903 cri.go:89] found id: ""
	I1217 11:58:53.230701 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.230709 3204903 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 11:58:53.230716 3204903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 11:58:53.230773 3204903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 11:58:53.256007 3204903 cri.go:89] found id: ""
	I1217 11:58:53.256034 3204903 logs.go:282] 0 containers: []
	W1217 11:58:53.256042 3204903 logs.go:284] No container was found matching "kindnet"
	I1217 11:58:53.256053 3204903 logs.go:123] Gathering logs for kubelet ...
	I1217 11:58:53.256065 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 11:58:53.314487 3204903 logs.go:123] Gathering logs for dmesg ...
	I1217 11:58:53.314524 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 11:58:53.331203 3204903 logs.go:123] Gathering logs for describe nodes ...
	I1217 11:58:53.331240 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 11:58:53.399250 3204903 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 11:58:53.390312    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.390861    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.392589    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.393116    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.394713    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 11:58:53.390312    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.390861    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.392589    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.393116    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 11:58:53.394713    4867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 11:58:53.399274 3204903 logs.go:123] Gathering logs for containerd ...
	I1217 11:58:53.399288 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 11:58:53.439803 3204903 logs.go:123] Gathering logs for container status ...
	I1217 11:58:53.439840 3204903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 11:58:53.468929 3204903 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001753064s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
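The SystemVerification warning in this block names a concrete opt-out: on a cgroup v1 host (as the CGROUPS_* listing above indicates), kubelet v1.35 or newer must be told to tolerate cgroup v1 via the 'FailCgroupV1' configuration option, and that mismatch may be why the kubelet never reports healthy here. A hedged sketch of the opt-out; only the option name comes from the warning text, and the v1beta1 field casing (failCgroupV1) is an assumption:

    # append the opt-out to the kubelet config kubeadm just wrote
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet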
	W1217 11:58:53.469033 3204903 out.go:285] * 
	W1217 11:58:53.469130 3204903 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001753064s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:58:53.469177 3204903 out.go:285] * 
	W1217 11:58:53.471457 3204903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 11:58:53.476865 3204903 out.go:203] 
	W1217 11:58:53.479927 3204903 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001753064s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 11:58:53.479964 3204903 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 11:58:53.479988 3204903 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 11:58:53.483089 3204903 out.go:203] 
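Minikube's own suggestion above can be applied verbatim on the next start attempt; the flag is quoted from the log, while <profile> is a placeholder:

    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd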
	W1217 11:58:49.759638 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:52.259559 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:54.260573 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:56.759512 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:58:59.259711 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:01.759695 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:03.759859 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:06.259746 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:08.759609 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:10.760559 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:13.259644 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:15.259686 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:17.259753 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:19.260149 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:21.759780 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:24.260629 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:26.759690 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:28.759892 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:31.260312 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:33.759633 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:35.759926 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:37.760524 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:40.259605 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:42.259707 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:44.260580 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:46.759769 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:49.260572 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:51.760687 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:54.259612 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:56.259701 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 11:59:58.759564 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:00.765674 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:03.260705 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:05.759690 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:08.259770 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:10.759636 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:12.759685 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:14.760245 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:17.259662 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:19.260014 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:21.760329 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:24.260230 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:26.260366 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:28.759678 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:31.259726 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:33.759557 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:35.759737 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:37.760125 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755552932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755645459Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755785534Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755875928Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.755955130Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756051382Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756130872Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756213471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756303471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756462466Z" level=info msg="Connect containerd service"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.756851273Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.757629330Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.779637015Z" level=info msg="Start subscribing containerd event"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.779721591Z" level=info msg="Start recovering state"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.784722605Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.784946994Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838108706Z" level=info msg="Start event monitor"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838307741Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838371346Z" level=info msg="Start streaming server"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838434852Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838493632Z" level=info msg="runtime interface starting up..."
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838556605Z" level=info msg="starting plugins..."
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.838618462Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 11:50:44 newest-cni-669680 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 11:50:44 newest-cni-669680 containerd[760]: time="2025-12-17T11:50:44.840932253Z" level=info msg="containerd successfully booted in 0.112312s"
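	The "failed to load cni during init" error earlier in this journal is expected for a --network-plugin=cni start: no CNI config has been installed yet, so /etc/cni/net.d stays empty until a CNI such as kindnet is deployed. A quick confirmation on the node (standard commands, not part of this capture):
	
	  sudo ls -la /etc/cni/net.d    # empty at this stage; populated once the CNI DaemonSet runs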
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:00:42.196500    6048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:00:42.197324    6048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:00:42.199237    6048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:00:42.199849    6048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:00:42.201724    6048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 12:00:42 up 17:43,  0 user,  load average: 0.74, 0.72, 1.31
	Linux newest-cni-669680 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 12:00:39 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:00:39 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 462.
	Dec 17 12:00:39 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:00:40 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:00:40 newest-cni-669680 kubelet[5934]: E1217 12:00:40.088762    5934 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:00:40 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:00:40 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:00:40 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 463.
	Dec 17 12:00:40 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:00:40 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:00:40 newest-cni-669680 kubelet[5940]: E1217 12:00:40.821149    5940 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:00:40 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:00:40 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:00:41 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 464.
	Dec 17 12:00:41 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:00:41 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:00:41 newest-cni-669680 kubelet[5966]: E1217 12:00:41.588099    5966 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:00:41 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:00:41 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:00:42 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 465.
	Dec 17 12:00:42 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:00:42 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:00:42 newest-cni-669680 kubelet[6057]: E1217 12:00:42.328898    6057 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:00:42 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:00:42 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
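The kubelet journal above pins the root cause: the v1.35.0-rc.1 kubelet refuses to start on a cgroup v1 host, so the apiserver never comes up and every probe of port 8443 is refused. A minimal check of the host's cgroup mode (standard commands, not taken from this log; run on the node, e.g. via minikube ssh):

	stat -fc %T /sys/fs/cgroup/      # prints "cgroup2fs" on cgroup v2, "tmpfs" on a cgroup v1 host
	mount | grep ' /sys/fs/cgroup '  # shows the filesystem actually mounted there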
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680: exit status 6 (325.12618ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 12:00:42.758239 3219565 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-669680" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-669680" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (107.69s)
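The status stderr above explains why kubectl commands were skipped: the "newest-cni-669680" endpoint is missing from the test kubeconfig. The warning in stdout already names the repair; a minimal invocation against this profile (a sketch, using the binary path from this run) would be:

	out/minikube-linux-arm64 update-context -p newest-cni-669680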

x
+
TestStartStop/group/newest-cni/serial/SecondStart (375.17s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 105 (6m10.074524229s)

-- stdout --
	* [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1217 12:00:44.347526 3219848 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:00:44.347663 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347673 3219848 out.go:374] Setting ErrFile to fd 2...
	I1217 12:00:44.347678 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347938 3219848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 12:00:44.348321 3219848 out.go:368] Setting JSON to false
	I1217 12:00:44.349222 3219848 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63795,"bootTime":1765909050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 12:00:44.349300 3219848 start.go:143] virtualization:  
	I1217 12:00:44.352466 3219848 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 12:00:44.356190 3219848 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 12:00:44.356282 3219848 notify.go:221] Checking for updates...
	I1217 12:00:44.362135 3219848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:00:44.365177 3219848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:44.368881 3219848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 12:00:44.372015 3219848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 12:00:44.375014 3219848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:00:44.378336 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:44.378951 3219848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:00:44.413369 3219848 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 12:00:44.413513 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.473970 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.464532408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.474081 3219848 docker.go:319] overlay module found
	I1217 12:00:44.477205 3219848 out.go:179] * Using the docker driver based on existing profile
	I1217 12:00:44.480155 3219848 start.go:309] selected driver: docker
	I1217 12:00:44.480182 3219848 start.go:927] validating driver "docker" against &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.480300 3219848 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:00:44.481122 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.568687 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.559079636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.569054 3219848 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 12:00:44.569088 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:44.569145 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:44.569196 3219848 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.574245 3219848 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 12:00:44.576964 3219848 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 12:00:44.579814 3219848 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 12:00:44.582545 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:44.582593 3219848 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 12:00:44.582604 3219848 cache.go:65] Caching tarball of preloaded images
	I1217 12:00:44.582624 3219848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 12:00:44.582700 3219848 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 12:00:44.582711 3219848 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 12:00:44.582826 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.602190 3219848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 12:00:44.602216 3219848 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 12:00:44.602262 3219848 cache.go:243] Successfully downloaded all kic artifacts
	I1217 12:00:44.602326 3219848 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:00:44.602428 3219848 start.go:364] duration metric: took 68.29µs to acquireMachinesLock for "newest-cni-669680"
	I1217 12:00:44.602457 3219848 start.go:96] Skipping create...Using existing machine configuration
	I1217 12:00:44.602505 3219848 fix.go:54] fixHost starting: 
	I1217 12:00:44.602917 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.620734 3219848 fix.go:112] recreateIfNeeded on newest-cni-669680: state=Stopped err=<nil>
	W1217 12:00:44.620765 3219848 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 12:00:44.623987 3219848 out.go:252] * Restarting existing docker container for "newest-cni-669680" ...
	I1217 12:00:44.624072 3219848 cli_runner.go:164] Run: docker start newest-cni-669680
	I1217 12:00:44.870900 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.893559 3219848 kic.go:432] container "newest-cni-669680" state is running.
	I1217 12:00:44.894282 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:44.917205 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.917570 3219848 machine.go:94] provisionDockerMachine start ...
	I1217 12:00:44.917645 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:44.945980 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:44.946096 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:44.946104 3219848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:00:44.946864 3219848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 12:00:48.084367 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 12:00:48.084399 3219848 ubuntu.go:182] provisioning hostname "newest-cni-669680"
	I1217 12:00:48.084507 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.104367 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.104656 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.104680 3219848 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 12:00:48.247265 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 12:00:48.247353 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.270652 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.270788 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.270817 3219848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:00:48.417473 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:00:48.417557 3219848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 12:00:48.417596 3219848 ubuntu.go:190] setting up certificates
	I1217 12:00:48.417639 3219848 provision.go:84] configureAuth start
	I1217 12:00:48.417749 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:48.437471 3219848 provision.go:143] copyHostCerts
	I1217 12:00:48.437568 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 12:00:48.437587 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 12:00:48.437717 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 12:00:48.437858 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 12:00:48.437877 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 12:00:48.437916 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 12:00:48.438005 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 12:00:48.438028 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 12:00:48.438055 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 12:00:48.438157 3219848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
	I1217 12:00:48.577436 3219848 provision.go:177] copyRemoteCerts
	I1217 12:00:48.577506 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:00:48.577546 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.595338 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.692538 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:00:48.711734 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 12:00:48.729881 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 12:00:48.748237 3219848 provision.go:87] duration metric: took 330.555362ms to configureAuth
	I1217 12:00:48.748262 3219848 ubuntu.go:206] setting minikube options for container-runtime
	I1217 12:00:48.748550 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:48.748561 3219848 machine.go:97] duration metric: took 3.830976751s to provisionDockerMachine
	I1217 12:00:48.748569 3219848 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 12:00:48.748581 3219848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:00:48.748643 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:00:48.748683 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.766578 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.864654 3219848 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:00:48.868220 3219848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 12:00:48.868249 3219848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 12:00:48.868261 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 12:00:48.868318 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 12:00:48.868401 3219848 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 12:00:48.868523 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:00:48.876210 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:48.894408 3219848 start.go:296] duration metric: took 145.823675ms for postStartSetup
	I1217 12:00:48.894507 3219848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 12:00:48.894563 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.913872 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.010734 3219848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 12:00:49.017136 3219848 fix.go:56] duration metric: took 4.414624566s for fixHost
	I1217 12:00:49.017182 3219848 start.go:83] releasing machines lock for "newest-cni-669680", held for 4.414721098s
	I1217 12:00:49.017319 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:49.041576 3219848 ssh_runner.go:195] Run: cat /version.json
	I1217 12:00:49.041642 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.041898 3219848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:00:49.041972 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.071567 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.072178 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.261249 3219848 ssh_runner.go:195] Run: systemctl --version
	I1217 12:00:49.267897 3219848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:00:49.272503 3219848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:00:49.272574 3219848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:00:49.280715 3219848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 12:00:49.280743 3219848 start.go:496] detecting cgroup driver to use...
	I1217 12:00:49.280787 3219848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 12:00:49.280844 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 12:00:49.298858 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 12:00:49.313120 3219848 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:00:49.313230 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:00:49.329245 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:00:49.342531 3219848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:00:49.461223 3219848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:00:49.579409 3219848 docker.go:234] disabling docker service ...
	I1217 12:00:49.579510 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:00:49.594800 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:00:49.608313 3219848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:00:49.737460 3219848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:00:49.883222 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:00:49.897339 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:00:49.911914 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 12:00:49.921268 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 12:00:49.930257 3219848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 12:00:49.930398 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 12:00:49.939639 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.948689 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 12:00:49.958342 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.967395 3219848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:00:49.975730 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 12:00:49.984582 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 12:00:49.993553 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 12:00:50.009983 3219848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:00:50.019753 3219848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:00:50.028837 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.142686 3219848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 12:00:50.264183 3219848 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 12:00:50.264308 3219848 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 12:00:50.268160 3219848 start.go:564] Will wait 60s for crictl version
	I1217 12:00:50.268261 3219848 ssh_runner.go:195] Run: which crictl
	I1217 12:00:50.271790 3219848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 12:00:50.298148 3219848 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 12:00:50.298258 3219848 ssh_runner.go:195] Run: containerd --version
	I1217 12:00:50.318643 3219848 ssh_runner.go:195] Run: containerd --version
	I1217 12:00:50.346609 3219848 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 12:00:50.349545 3219848 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 12:00:50.366603 3219848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 12:00:50.370482 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:00:50.383622 3219848 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 12:00:50.386526 3219848 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:00:50.386672 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:50.386774 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.415106 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.415132 3219848 containerd.go:534] Images already preloaded, skipping extraction
	I1217 12:00:50.415224 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.444492 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.444517 3219848 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:00:50.444526 3219848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 12:00:50.444639 3219848 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 12:00:50.444718 3219848 ssh_runner.go:195] Run: sudo crictl info
	I1217 12:00:50.471453 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:50.471478 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:50.471497 3219848 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 12:00:50.471553 3219848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:00:50.471711 3219848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
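	Editor's note: the rendered config above is a single four-document kubeadm file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sanity check of such a file, assuming a kubeadm binary of v1.26 or newer is on the node (the subcommand does not exist in older releases):

	    # Validate the generated multi-document kubeadm config in place.
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new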
	I1217 12:00:50.471828 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 12:00:50.480867 3219848 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:00:50.480998 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:00:50.488686 3219848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 12:00:50.504356 3219848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 12:00:50.520176 3219848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1217 12:00:50.535930 3219848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 12:00:50.540134 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
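	Editor's note: the /etc/hosts command above is an idempotent rewrite: strip any stale control-plane.minikube.internal entry, append the current mapping, and copy the temp file over /etc/hosts. Unrolled, the same one-liner reads:

	    # Drop any existing entry, append the fresh mapping, then replace the file.
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	    echo $'192.168.76.2\tcontrol-plane.minikube.internal' >> /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts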
	I1217 12:00:50.550629 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.669384 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:50.685420 3219848 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 12:00:50.685479 3219848 certs.go:195] generating shared ca certs ...
	I1217 12:00:50.685497 3219848 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:50.685634 3219848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 12:00:50.685683 3219848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 12:00:50.685690 3219848 certs.go:257] generating profile certs ...
	I1217 12:00:50.685787 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 12:00:50.685851 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 12:00:50.685893 3219848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 12:00:50.686084 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 12:00:50.686149 3219848 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 12:00:50.686177 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:00:50.686225 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:00:50.686286 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:00:50.686340 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 12:00:50.686422 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:50.687047 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:00:50.710384 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:00:50.730920 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:00:50.751265 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:00:50.772018 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 12:00:50.790833 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 12:00:50.810114 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:00:50.828402 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:00:50.846753 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:00:50.865705 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 12:00:50.886567 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 12:00:50.904533 3219848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:00:50.917457 3219848 ssh_runner.go:195] Run: openssl version
	I1217 12:00:50.923993 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.931839 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:00:50.939507 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943237 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943304 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.984637 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:00:50.992168 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 12:00:50.999795 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 12:00:51.020372 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024379 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024566 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.066006 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:00:51.074211 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.082049 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 12:00:51.090651 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.094888 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.095004 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.137313 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
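	Editor's note: each openssl x509 -hash run above computes the subject hash that OpenSSL uses to name trust-store symlinks, which is why the follow-up checks test /etc/ssl/certs/b5213941.0, 51391683.0 and 3ec20f2e.0. The same check done by hand, as a sketch:

	    # The <hash>.0 symlink in /etc/ssl/certs must match the cert's subject hash.
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ls -l "/etc/ssl/certs/${h}.0"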
	I1217 12:00:51.145186 3219848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:00:51.149385 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 12:00:51.191456 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 12:00:51.232840 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 12:00:51.275219 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 12:00:51.317313 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 12:00:51.358746 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
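	Editor's note: the six -checkend 86400 runs verify that each control-plane certificate remains valid for at least another 24 hours; openssl exits 0 if the certificate will not expire within the given number of seconds and 1 if it will, so a non-zero status here would trigger regeneration. For example:

	    # Exit 0: valid for another 24h. Exit 1: expires within 24h.
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "ok" || echo "expiring"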
	I1217 12:00:51.399851 3219848 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:51.399946 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 12:00:51.400058 3219848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:00:51.427405 3219848 cri.go:89] found id: ""
	I1217 12:00:51.427480 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:00:51.435564 3219848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 12:00:51.435593 3219848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 12:00:51.435648 3219848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 12:00:51.443379 3219848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 12:00:51.443986 3219848 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-669680" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.444236 3219848 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-669680" cluster setting kubeconfig missing "newest-cni-669680" context setting]
	I1217 12:00:51.444696 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.446096 3219848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 12:00:51.454141 3219848 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 12:00:51.454214 3219848 kubeadm.go:602] duration metric: took 18.613293ms to restartPrimaryControlPlane
	I1217 12:00:51.454230 3219848 kubeadm.go:403] duration metric: took 54.392206ms to StartCluster
	I1217 12:00:51.454245 3219848 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.454304 3219848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.455245 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.455481 3219848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 12:00:51.455797 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:51.455846 3219848 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:00:51.455911 3219848 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-669680"
	I1217 12:00:51.455924 3219848 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-669680"
	I1217 12:00:51.455953 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.456410 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456591 3219848 addons.go:70] Setting dashboard=true in profile "newest-cni-669680"
	I1217 12:00:51.457002 3219848 addons.go:239] Setting addon dashboard=true in "newest-cni-669680"
	W1217 12:00:51.457012 3219848 addons.go:248] addon dashboard should already be in state true
	I1217 12:00:51.457034 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.457458 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456605 3219848 addons.go:70] Setting default-storageclass=true in profile "newest-cni-669680"
	I1217 12:00:51.458033 3219848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-669680"
	I1217 12:00:51.458306 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.460659 3219848 out.go:179] * Verifying Kubernetes components...
	I1217 12:00:51.463611 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:51.495379 3219848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:00:51.502753 3219848 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.502777 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 12:00:51.502845 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.511997 3219848 addons.go:239] Setting addon default-storageclass=true in "newest-cni-669680"
	I1217 12:00:51.512038 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.512543 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.527586 3219848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 12:00:51.536600 3219848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 12:00:51.539513 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 12:00:51.539539 3219848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 12:00:51.539612 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.555471 3219848 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.555502 3219848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:00:51.555570 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.569622 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.592016 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.601832 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
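	Editor's note: the three sshutil entries show how every ssh_runner command reaches the node under the docker driver: SSH to 127.0.0.1 on the container's published port for 22/tcp (36053 here, extracted by the docker inspect template above, equivalent to `docker port newest-cni-669680 22`), as user docker with the profile's ed25519 key. An equivalent manual session, using the exact values from the log, as a sketch:

	    # Open the same session minikube's ssh_runner uses (values from the log).
	    ssh -i /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 \
	        -p 36053 docker@127.0.0.1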
	I1217 12:00:51.689678 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:51.731294 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.749491 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.814469 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 12:00:51.814496 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 12:00:51.839602 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 12:00:51.839672 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 12:00:51.852764 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 12:00:51.852827 3219848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 12:00:51.865089 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 12:00:51.865152 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 12:00:51.878190 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 12:00:51.878259 3219848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 12:00:51.890831 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 12:00:51.890854 3219848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 12:00:51.903270 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 12:00:51.903294 3219848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 12:00:51.916127 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 12:00:51.916153 3219848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 12:00:51.929059 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 12:00:51.929123 3219848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 12:00:51.942273 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.502896 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.502968 3219848 retry.go:31] will retry after 269.884821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.503026 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503067 3219848 retry.go:31] will retry after 319.702383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
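	Editor's note: every failed apply above is re-queued by minikube's retry.go with a short randomized delay (269ms, 319ms, 196ms, and so on) rather than a fixed backoff. In shell terms the behavior is roughly the loop below; the delays are illustrative, not minikube's actual constants:

	    # Rough rendering of the retry loop; real delays are randomized by retry.go.
	    for delay in 0.3 0.5 0.9 1.7; do
	      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	        -f /etc/kubernetes/addons/storageclass.yaml && break
	      sleep "$delay"
	    done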
	I1217 12:00:52.503040 3219848 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:00:52.503258 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:52.503300 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503321 3219848 retry.go:31] will retry after 196.810414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
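	Editor's note: every retry fails with the same signature: kubectl's client-side validation tries to fetch the OpenAPI schema from https://localhost:8443 and gets connection refused, meaning kube-apiserver is not listening yet. That is consistent with the empty crictl listing at 12:00:51 and the repeated pgrep for kube-apiserver below. Two checks that would confirm this from inside the node (standard endpoints, shown as a sketch):

	    # Is an apiserver container present at all?
	    sudo crictl ps -a --name kube-apiserver
	    # Is anything answering on 8443? Expect "connection refused" while it is down.
	    curl -k https://localhost:8443/healthz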
	I1217 12:00:52.700893 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.770562 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.770599 3219848 retry.go:31] will retry after 481.518663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.773838 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:52.823221 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:52.855276 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.855328 3219848 retry.go:31] will retry after 391.667259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.894877 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.894917 3219848 retry.go:31] will retry after 200.928151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.004579 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:53.096394 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:53.155868 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.155897 3219848 retry.go:31] will retry after 564.238822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.248228 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:53.253066 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.368787 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.368822 3219848 retry.go:31] will retry after 377.070742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.369052 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.369071 3219848 retry.go:31] will retry after 485.691157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.504052 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:53.720468 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:53.746162 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:53.794993 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.795027 3219848 retry.go:31] will retry after 872.052872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.811480 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.811533 3219848 retry.go:31] will retry after 558.92589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.855758 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.922708 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.922745 3219848 retry.go:31] will retry after 803.451465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.003704 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
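Editor's note: the `sudo pgrep -xnf kube-apiserver.*minikube.*` runs interleaved roughly every 500ms are minikube polling for the apiserver process inside the node (`-x` exact match, `-f` match the full command line, `-n` newest match). A minimal sketch of that polling loop follows; the pattern string is taken from the log, but the helper itself is an illustrative assumption, not minikube's ssh_runner.

    // Hedged sketch: poll for a process via pgrep until it appears or a
    // deadline expires, as the interleaved pgrep runs above do.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForProcess(pattern string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches the pattern
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond) // matches the ~2Hz cadence in the log
        }
        return false
    }

    func main() {
        fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 30*time.Second))
    }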
	I1217 12:00:54.370776 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:54.437621 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.437652 3219848 retry.go:31] will retry after 1.190014231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.503835 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:54.667963 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:54.726498 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:54.728210 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.728287 3219848 retry.go:31] will retry after 1.413986656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:54.813279 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.813372 3219848 retry.go:31] will retry after 1.840693776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.005986 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:55.504112 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:55.628242 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:55.689054 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.689136 3219848 retry.go:31] will retry after 1.799425819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.003624 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:56.142943 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:56.205592 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.205625 3219848 retry.go:31] will retry after 2.655712888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.503981 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:56.654730 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:56.717604 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.717641 3219848 retry.go:31] will retry after 1.909418395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.004223 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:57.489437 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:57.503984 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:57.562808 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.562840 3219848 retry.go:31] will retry after 3.72719526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.014740 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.503409 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.627253 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:58.690443 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.690481 3219848 retry.go:31] will retry after 3.549926007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.861704 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:58.923654 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.923683 3219848 retry.go:31] will retry after 2.058003245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:59.003967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:59.504167 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.018808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.504031 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.982724 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:01.004335 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:01.111365 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.111399 3219848 retry.go:31] will retry after 3.900095446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.291002 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:01.368946 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.368996 3219848 retry.go:31] will retry after 3.675584678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.503381 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.004403 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.241403 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:02.307939 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.307978 3219848 retry.go:31] will retry after 5.738469139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.504084 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.003562 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:04.005140 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:04.503830 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.003702 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.012660 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:05.045335 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:05.083423 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.083461 3219848 retry.go:31] will retry after 9.235586003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:01:05.118369 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.118401 3219848 retry.go:31] will retry after 3.828272571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.503857 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.003637 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.504078 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.003401 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.503344 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.004170 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.047658 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:08.113675 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.113710 3219848 retry.go:31] will retry after 7.390134832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.504355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.946950 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:01:09.003509 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:09.011595 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:09.011629 3219848 retry.go:31] will retry after 14.170665244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:09.503956 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.018957 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.503456 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.004169 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.503808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.003522 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.503603 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.003862 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:14.004363 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
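
Between applies, the ssh_runner lines above poll pgrep on a roughly 500ms cadence, waiting for a kube-apiserver process to appear. A hedged local sketch of that wait loop, assuming only that pgrep is available (waitForAPIServerProcess is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	return false
}

func main() {
	if !waitForAPIServerProcess(30 * time.Second) {
		fmt.Println("kube-apiserver never started")
	}
}
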
	I1217 12:01:14.319308 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:14.385208 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:14.385243 3219848 retry.go:31] will retry after 5.459360953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:14.503378 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.006355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504086 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504108 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:15.572879 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:15.572915 3219848 retry.go:31] will retry after 11.777794795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:16.005530 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:16.503503 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.003649 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.503430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.005004 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.504088 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.003423 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.503667 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.845708 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:19.909350 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:19.909381 3219848 retry.go:31] will retry after 9.722081791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:20.003736 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:20.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.004457 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.504148 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.003426 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.504235 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.004166 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.183313 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:23.244255 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.244289 3219848 retry.go:31] will retry after 19.619062537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.503427 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:24.006966 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:24.503758 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.004125 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.503463 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.004155 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.504576 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.003556 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.351598 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:27.419162 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.419195 3219848 retry.go:31] will retry after 15.164194741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.503619 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.003385 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.503474 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.004314 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.503968 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.632290 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:29.699987 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:29.700018 3219848 retry.go:31] will retry after 12.658501476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:30.003430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:30.503407 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.003818 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.504094 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.003845 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.503410 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.005413 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.503962 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:34.003405 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:34.503770 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.004969 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.504211 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.003492 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.503881 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.008063 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.504267 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.004154 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.504195 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:39.005022 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:39.504074 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.009459 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.504054 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.004134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.504134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.003867 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.359033 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:42.424319 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.424350 3219848 retry.go:31] will retry after 39.499798177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.503565 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.584549 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:42.654579 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.654612 3219848 retry.go:31] will retry after 22.182784721s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.864124 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:42.925874 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.925916 3219848 retry.go:31] will retry after 18.241160237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:43.004102 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:43.504356 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:44.004028 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:44.503929 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.003640 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.503747 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.003443 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.003372 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.503601 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.003536 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.503987 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:49.003434 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:49.504162 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.003493 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.503875 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.004324 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.503888 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:51.503983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:51.536666 3219848 cri.go:89] found id: ""
	I1217 12:01:51.536689 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.536698 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:51.536704 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:51.536768 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:51.562047 3219848 cri.go:89] found id: ""
	I1217 12:01:51.562070 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.562078 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:51.562084 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:51.562149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:51.586286 3219848 cri.go:89] found id: ""
	I1217 12:01:51.586309 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.586317 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:51.586323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:51.586381 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:51.611834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.611858 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.611867 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:51.611873 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:51.611942 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:51.637620 3219848 cri.go:89] found id: ""
	I1217 12:01:51.637643 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.637651 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:51.637658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:51.637715 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:51.663176 3219848 cri.go:89] found id: ""
	I1217 12:01:51.663198 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.663206 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:51.663212 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:51.663273 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:51.688038 3219848 cri.go:89] found id: ""
	I1217 12:01:51.688064 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.688083 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:51.688090 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:51.688159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:51.715834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.715860 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.715870 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
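
The cri.go sequence above runs `sudo crictl ps -a --quiet --name=<component>` once per control-plane component and treats empty output as zero containers. A minimal sketch reproducing that check locally, assuming crictl is on PATH (containerIDs is an illustrative helper, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the container IDs crictl reports for a given name
// filter; with no matches, crictl exits 0 and prints nothing.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// Mirrors the `No container was found matching "..."` warnings.
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}
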
	I1217 12:01:51.715879 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:51.715890 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:51.772533 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:51.772567 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:51.788370 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:51.788400 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:51.855552 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:51.847275    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.848081    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849574    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849998    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.851493    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
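Every describe-nodes attempt in this run fails the same way: kubectl cannot reach the apiserver at localhost:8443 and the dial is refused outright, which means nothing is listening on that port at all. That matches the empty crictl listings above, where no kube-apiserver container exists. A minimal Go probe that distinguishes this case from a hung apiserver (a hang would time out rather than refuse):

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the apiserver port from the errors above. "connection refused"
// means there is no listener; a slow or wedged apiserver would hit the
// timeout instead, which is a different failure mode.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}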
	I1217 12:01:51.855615 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:51.855635 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:51.880660 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:51.880693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
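Each polling cycle in this log has the same shape: ask crictl for the IDs of containers matching each expected control-plane name, and when every lookup comes back empty, sweep the kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal sketch of that ID lookup, assuming only that crictl is installed on the node; this illustrates the logged command, not minikube's actual cri.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same command as the log lines above:
// crictl ps -a --quiet --name=<name> prints one container ID per line,
// or nothing at all, which is the `found id: ""` case in this log.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println("lookup error:", err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
	}
}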
	I1217 12:01:54.414807 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:54.425488 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:54.425558 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:54.453841 3219848 cri.go:89] found id: ""
	I1217 12:01:54.453870 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.453880 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:54.453887 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:54.453946 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:54.478957 3219848 cri.go:89] found id: ""
	I1217 12:01:54.478982 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.478991 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:54.478998 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:54.479060 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:54.504488 3219848 cri.go:89] found id: ""
	I1217 12:01:54.504516 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.504535 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:54.504543 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:54.504606 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:54.529418 3219848 cri.go:89] found id: ""
	I1217 12:01:54.529445 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.529454 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:54.529460 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:54.529519 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:54.557757 3219848 cri.go:89] found id: ""
	I1217 12:01:54.557781 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.557790 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:54.557797 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:54.557854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:54.586961 3219848 cri.go:89] found id: ""
	I1217 12:01:54.586996 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.587004 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:54.587011 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:54.587077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:54.612590 3219848 cri.go:89] found id: ""
	I1217 12:01:54.612617 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.612626 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:54.612633 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:54.612694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:54.638207 3219848 cri.go:89] found id: ""
	I1217 12:01:54.638234 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.638243 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:54.638253 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:54.638264 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:54.695917 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:54.695955 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:54.712729 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:54.712759 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:54.782298 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:54.774102    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.774684    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776463    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776850    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.778510    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:01:54.782321 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:54.782333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:54.807165 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:54.807196 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
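Each cycle opens with pgrep -xnf kube-apiserver.*minikube.*, where -f matches the pattern against the full command line, -x requires the whole line to match exactly, and -n reports only the newest matching process. pgrep exits non-zero when nothing matches, so the check can be wrapped from Go like this (a sketch, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID mirrors the pgrep check that opens each cycle above.
// A non-nil error covers both "pgrep missing" and "no process matched",
// since pgrep signals an empty match set with exit status 1.
func apiserverPID() (string, bool) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	if pid, ok := apiserverPID(); ok {
		fmt.Println("kube-apiserver running, pid", pid)
	} else {
		fmt.Println("no kube-apiserver process found")
	}
}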
	I1217 12:01:57.336099 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:57.346978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:57.347048 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:57.371132 3219848 cri.go:89] found id: ""
	I1217 12:01:57.371155 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.371163 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:57.371169 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:57.371232 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:57.396905 3219848 cri.go:89] found id: ""
	I1217 12:01:57.396933 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.396942 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:57.396948 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:57.397011 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:57.425337 3219848 cri.go:89] found id: ""
	I1217 12:01:57.425366 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.425374 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:57.425381 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:57.425440 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:57.449681 3219848 cri.go:89] found id: ""
	I1217 12:01:57.449709 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.449718 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:57.449725 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:57.449784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:57.475302 3219848 cri.go:89] found id: ""
	I1217 12:01:57.475328 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.475337 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:57.475343 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:57.475412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:57.500270 3219848 cri.go:89] found id: ""
	I1217 12:01:57.500344 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.500369 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:57.500389 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:57.500509 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:57.527492 3219848 cri.go:89] found id: ""
	I1217 12:01:57.527519 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.527532 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:57.527538 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:57.527650 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:57.553482 3219848 cri.go:89] found id: ""
	I1217 12:01:57.553549 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.553576 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:57.553602 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:57.553627 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:57.609257 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:57.609292 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:57.625325 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:57.625352 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:57.691022 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:57.682604    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.683106    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.684793    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.685506    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.687043    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:01:57.691048 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:57.691061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:57.716301 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:57.716333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
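The "container status" sweep is a shell fallback chain: `which crictl || echo crictl` resolves the binary path (falling back to the bare name so the command still parses), and if the crictl listing fails entirely, docker ps -a runs instead. A rough Go equivalent that tries each lister in order; illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, mirroring
// the logged shell pipeline. The first lister that succeeds wins.
func containerStatus() (string, error) {
	for _, argv := range [][]string{
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "docker", "ps", "-a"},
	} {
		if out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	return "", fmt.Errorf("neither crictl nor docker produced a container listing")
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}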
	I1217 12:02:00.244802 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:00.315692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:00.315780 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:00.376798 3219848 cri.go:89] found id: ""
	I1217 12:02:00.376842 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.376852 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:00.376859 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:00.376949 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:00.414474 3219848 cri.go:89] found id: ""
	I1217 12:02:00.414502 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.414513 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:00.414520 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:00.414590 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:00.447266 3219848 cri.go:89] found id: ""
	I1217 12:02:00.447306 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.447316 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:00.447323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:00.447415 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:00.477352 3219848 cri.go:89] found id: ""
	I1217 12:02:00.477378 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.477387 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:00.477394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:00.477457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:00.506577 3219848 cri.go:89] found id: ""
	I1217 12:02:00.506605 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.506614 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:00.506621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:00.506720 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:00.533943 3219848 cri.go:89] found id: ""
	I1217 12:02:00.533966 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.533975 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:00.533982 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:00.534051 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:00.560396 3219848 cri.go:89] found id: ""
	I1217 12:02:00.560462 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.560472 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:00.560479 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:00.560573 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:00.587859 3219848 cri.go:89] found id: ""
	I1217 12:02:00.587931 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.587955 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:00.587983 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:00.588035 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:00.620134 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:00.620217 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:00.677187 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:00.677223 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:00.694138 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:00.694242 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:00.762938 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:00.753622    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.754338    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.755466    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757073    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757631    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:00.763025 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:00.763058 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:01.167394 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:02:01.232118 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:01.232151 3219848 retry.go:31] will retry after 39.797194994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
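The storageclass apply cannot pass kubectl's client-side validation while the apiserver is down, so the attempt is handed to minikube's retry helper. The fractional delays in this log (39.797194994s here, 21.256241349s for the dashboard below) look jittered rather than fixed; the exact policy lives in minikube's retry package, so the following is only a generic jittered-retry sketch, not that code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter re-runs fn until it succeeds or attempts are exhausted,
// sleeping base plus a random extra between tries. Randomized delays like
// the ones logged above help avoid retrying in lockstep.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	err := retryWithJitter(3, 2*time.Second, func() error {
		// Stand-in for the failing kubectl apply in the log above.
		return errors.New("connect: connection refused")
	})
	fmt.Println("gave up:", err)
}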
	I1217 12:02:03.292559 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:03.304708 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:03.304784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:03.332491 3219848 cri.go:89] found id: ""
	I1217 12:02:03.332511 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.332519 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:03.332526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:03.332630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:03.361080 3219848 cri.go:89] found id: ""
	I1217 12:02:03.361107 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.361115 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:03.361121 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:03.361179 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:03.397354 3219848 cri.go:89] found id: ""
	I1217 12:02:03.397382 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.397391 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:03.397397 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:03.397473 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:03.431465 3219848 cri.go:89] found id: ""
	I1217 12:02:03.431493 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.431502 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:03.431509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:03.431569 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:03.464102 3219848 cri.go:89] found id: ""
	I1217 12:02:03.464125 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.464133 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:03.464139 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:03.464197 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:03.497848 3219848 cri.go:89] found id: ""
	I1217 12:02:03.497879 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.497888 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:03.497895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:03.497952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:03.568108 3219848 cri.go:89] found id: ""
	I1217 12:02:03.568130 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.568139 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:03.568144 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:03.568202 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:03.632108 3219848 cri.go:89] found id: ""
	I1217 12:02:03.632136 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.632151 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:03.632161 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:03.632173 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:03.724972 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:03.708641    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.709073    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.716627    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.717278    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.719035    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:03.725000 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:03.725012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:03.753083 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:03.753174 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:03.790574 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:03.790596 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:03.863404 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:03.863488 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
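The diagnostic sweep itself is straightforward: the last 400 journal lines for the kubelet and containerd units, plus kernel messages at warning severity and above via dmesg. A minimal sketch of the journalctl half (the dmesg severity filter from the log is omitted here):

package main

import (
	"fmt"
	"os/exec"
)

// gatherUnitLogs matches the journalctl calls in the sweep above:
// the most recent 400 lines for one systemd unit.
func gatherUnitLogs(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := gatherUnitLogs(unit)
		if err != nil {
			fmt.Println(unit, "gather failed:", err)
			continue
		}
		fmt.Printf("=== %s: %d bytes of journal ===\n", unit, len(logs))
	}
}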
	I1217 12:02:04.837606 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:02:04.901525 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:04.901562 3219848 retry.go:31] will retry after 21.256241349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
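The dashboard addon fails for the same root cause: kubectl's client-side validation first downloads the server's OpenAPI document, so with nothing listening on 8443 every manifest is rejected before anything is applied (--validate=false would skip only the schema fetch, not make the server reachable). A sketch of driving such an apply from Go, reusing the kubectl and kubeconfig paths from the logged command; the helper name is mine, not minikube's:

package main

import (
	"fmt"
	"os/exec"
)

// applyManifests builds a kubectl apply like the one logged above: the
// kubeconfig is passed via the child environment and each manifest gets
// its own -f flag. Nothing here can succeed until the apiserver is up.
func applyManifests(kubectl, kubeconfig string, files []string) error {
	args := []string{"apply", "--force"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/dashboard-ns.yaml"},
	)
	fmt.Println(err)
}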
	I1217 12:02:06.385200 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:06.395642 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:06.395734 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:06.422500 3219848 cri.go:89] found id: ""
	I1217 12:02:06.422526 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.422535 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:06.422542 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:06.422603 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:06.449741 3219848 cri.go:89] found id: ""
	I1217 12:02:06.449763 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.449773 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:06.449779 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:06.449836 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:06.478823 3219848 cri.go:89] found id: ""
	I1217 12:02:06.478844 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.478852 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:06.478858 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:06.478924 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:06.507270 3219848 cri.go:89] found id: ""
	I1217 12:02:06.507298 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.507307 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:06.507313 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:06.507390 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:06.536741 3219848 cri.go:89] found id: ""
	I1217 12:02:06.536774 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.536783 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:06.536790 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:06.536859 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:06.569124 3219848 cri.go:89] found id: ""
	I1217 12:02:06.569152 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.569161 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:06.569168 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:06.569223 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:06.597119 3219848 cri.go:89] found id: ""
	I1217 12:02:06.597140 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.597148 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:06.597155 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:06.597213 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:06.623129 3219848 cri.go:89] found id: ""
	I1217 12:02:06.623152 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.623161 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:06.623171 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:06.623181 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:06.679634 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:06.679669 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:06.696235 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:06.696273 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:06.764004 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:06.755277    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.755704    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.757595    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.758654    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.760132    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:06.764031 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:06.764044 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:06.789440 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:06.789478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:09.319544 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:09.335051 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:09.335144 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:09.363250 3219848 cri.go:89] found id: ""
	I1217 12:02:09.363278 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.363288 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:09.363296 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:09.363357 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:09.387533 3219848 cri.go:89] found id: ""
	I1217 12:02:09.387598 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.387624 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:09.387646 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:09.387735 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:09.411943 3219848 cri.go:89] found id: ""
	I1217 12:02:09.411970 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.411978 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:09.411985 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:09.412042 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:09.438061 3219848 cri.go:89] found id: ""
	I1217 12:02:09.438127 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.438151 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:09.438167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:09.438250 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:09.463378 3219848 cri.go:89] found id: ""
	I1217 12:02:09.463407 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.463415 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:09.463422 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:09.463481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:09.494069 3219848 cri.go:89] found id: ""
	I1217 12:02:09.494098 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.494107 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:09.494114 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:09.494178 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:09.526694 3219848 cri.go:89] found id: ""
	I1217 12:02:09.526771 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.526795 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:09.526815 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:09.526923 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:09.553523 3219848 cri.go:89] found id: ""
	I1217 12:02:09.553585 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.553616 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:09.553641 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:09.553678 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:09.618427 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:09.618463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:09.634212 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:09.634244 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:09.696895 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:09.688293    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.688801    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690481    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690806    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.692990    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:09.688293    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.688801    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690481    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690806    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.692990    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
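	Every kubectl error in that block is the same symptom: the client resolves localhost:8443 to [::1]:8443 and the TCP dial is refused outright, meaning nothing is listening on the apiserver port at all (consistent with the empty kube-apiserver sweep above), not a TLS or authorization problem. A quick check from inside the node, assuming ss and curl are available there:
	
	# Confirm nothing is bound to the apiserver port before blaming kubectl.
	sudo ss -tlnp | grep -w 8443 || echo "nothing listening on :8443"
	curl -sk --max-time 2 https://localhost:8443/healthz || echo "apiserver unreachable"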
	I1217 12:02:09.696914 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:09.696926 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:09.722288 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:09.722324 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
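	That container-status gather closes one full diagnostic cycle; the pgrep call on the next line opens the next one. Minikube re-polls for a kube-apiserver process roughly every three seconds here (12:02:09, 12:02:12, 12:02:15, ...) and, on each miss, repeats the same sweep and log gathering. The poll itself is easy to emulate; pgrep exits non-zero when nothing matches, which is what keeps the loop retrying:
	
	# The poll that opens each cycle; -f matches against the full command line,
	# -x requires the whole line to match the pattern, -n picks the newest PID.
	if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  echo "kube-apiserver process found"
	else
	  echo "kube-apiserver not running yet"
	fi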
	I1217 12:02:12.249861 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:12.261558 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:12.261626 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:12.293092 3219848 cri.go:89] found id: ""
	I1217 12:02:12.293113 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.293121 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:12.293128 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:12.293188 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:12.319347 3219848 cri.go:89] found id: ""
	I1217 12:02:12.319374 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.319384 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:12.319390 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:12.319448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:12.343912 3219848 cri.go:89] found id: ""
	I1217 12:02:12.343939 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.343948 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:12.343955 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:12.344013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:12.370544 3219848 cri.go:89] found id: ""
	I1217 12:02:12.370571 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.370581 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:12.370587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:12.370645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:12.397552 3219848 cri.go:89] found id: ""
	I1217 12:02:12.397578 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.397587 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:12.397593 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:12.397652 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:12.421606 3219848 cri.go:89] found id: ""
	I1217 12:02:12.421673 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.421699 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:12.421715 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:12.421791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:12.447065 3219848 cri.go:89] found id: ""
	I1217 12:02:12.447088 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.447097 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:12.447103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:12.447169 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:12.473547 3219848 cri.go:89] found id: ""
	I1217 12:02:12.473575 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.473583 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:12.473645 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:12.473670 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:12.489529 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:12.489559 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:12.574945 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:12.562789    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567073    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567687    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569241    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569901    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:12.562789    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567073    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567687    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569241    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569901    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:12.574970 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:12.574986 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:12.601521 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:12.601562 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:12.633893 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:12.633920 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:15.190960 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:15.202334 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:15.202461 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:15.231453 3219848 cri.go:89] found id: ""
	I1217 12:02:15.231486 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.231495 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:15.231507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:15.231609 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:15.264097 3219848 cri.go:89] found id: ""
	I1217 12:02:15.264120 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.264129 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:15.264135 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:15.264196 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:15.293547 3219848 cri.go:89] found id: ""
	I1217 12:02:15.293574 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.293583 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:15.293589 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:15.293650 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:15.321905 3219848 cri.go:89] found id: ""
	I1217 12:02:15.321968 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.321991 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:15.322013 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:15.322084 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:15.349052 3219848 cri.go:89] found id: ""
	I1217 12:02:15.349085 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.349095 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:15.349102 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:15.349175 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:15.374350 3219848 cri.go:89] found id: ""
	I1217 12:02:15.374377 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.374387 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:15.374394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:15.374457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:15.412039 3219848 cri.go:89] found id: ""
	I1217 12:02:15.412066 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.412075 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:15.412082 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:15.412153 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:15.441228 3219848 cri.go:89] found id: ""
	I1217 12:02:15.441255 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.441265 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:15.441274 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:15.441309 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:15.467564 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:15.467601 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:15.501031 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:15.501100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:15.564025 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:15.564059 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:15.581879 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:15.581906 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:15.647244 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:15.638661    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.639327    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641006    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641615    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.643194    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:15.638661    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.639327    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641006    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641615    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.643194    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:18.147543 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:18.158738 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:18.158817 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:18.184828 3219848 cri.go:89] found id: ""
	I1217 12:02:18.184853 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.184862 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:18.184869 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:18.184931 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:18.211904 3219848 cri.go:89] found id: ""
	I1217 12:02:18.211935 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.211944 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:18.211950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:18.212010 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:18.237088 3219848 cri.go:89] found id: ""
	I1217 12:02:18.237154 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.237170 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:18.237177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:18.237239 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:18.278916 3219848 cri.go:89] found id: ""
	I1217 12:02:18.278943 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.278953 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:18.278960 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:18.279018 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:18.307105 3219848 cri.go:89] found id: ""
	I1217 12:02:18.307133 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.307143 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:18.307150 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:18.307210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:18.336099 3219848 cri.go:89] found id: ""
	I1217 12:02:18.336132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.336141 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:18.336148 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:18.336217 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:18.362366 3219848 cri.go:89] found id: ""
	I1217 12:02:18.362432 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.362456 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:18.362472 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:18.362547 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:18.388125 3219848 cri.go:89] found id: ""
	I1217 12:02:18.388151 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.388160 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:18.388169 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:18.388180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:18.456052 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:18.446941    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.447634    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449296    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449839    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.451474    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:18.446941    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.447634    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449296    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449839    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.451474    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:18.456114 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:18.456134 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:18.481868 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:18.481899 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:18.525523 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:18.525600 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:18.594163 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:18.594200 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:21.113595 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:21.124720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:21.124792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:21.150373 3219848 cri.go:89] found id: ""
	I1217 12:02:21.150397 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.150406 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:21.150412 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:21.150471 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:21.179044 3219848 cri.go:89] found id: ""
	I1217 12:02:21.179069 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.179078 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:21.179085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:21.179156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:21.205105 3219848 cri.go:89] found id: ""
	I1217 12:02:21.205132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.205141 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:21.205147 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:21.205207 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:21.230210 3219848 cri.go:89] found id: ""
	I1217 12:02:21.230235 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.230243 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:21.230251 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:21.230328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:21.265026 3219848 cri.go:89] found id: ""
	I1217 12:02:21.265052 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.265061 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:21.265068 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:21.265128 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:21.302976 3219848 cri.go:89] found id: ""
	I1217 12:02:21.303002 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.303017 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:21.303025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:21.303097 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:21.333258 3219848 cri.go:89] found id: ""
	I1217 12:02:21.333282 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.333292 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:21.333299 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:21.333361 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:21.359283 3219848 cri.go:89] found id: ""
	I1217 12:02:21.359308 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.359317 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:21.359327 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:21.359338 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:21.416901 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:21.416944 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:21.433045 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:21.433074 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:21.505849 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:21.494474    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.495106    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.496640    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.497253    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.500699    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:21.494474    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.495106    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.496640    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.497253    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.500699    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:21.505920 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:21.505948 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:21.534970 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:21.535156 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:21.925292 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:02:21.990437 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:21.990546 3219848 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
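	The storage-provisioner apply fails for the same root cause as the describe-nodes calls: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and that GET is refused. The error text's own suggestion, --validate=false, would only skip that schema download; with nothing listening on :8443 the apply still has no server to talk to, so the flag is not a fix here. A sketch of what that would look like, using the exact paths from the log:
	
	# Skipping validation removes the openapi download but not the dead apiserver;
	# expect this to fail too until kube-apiserver is actually up.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml \
	  || echo "apply still fails: no apiserver behind localhost:8443"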
	I1217 12:02:24.077604 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:24.089001 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:24.089072 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:24.120652 3219848 cri.go:89] found id: ""
	I1217 12:02:24.120677 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.120688 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:24.120695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:24.120755 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:24.147236 3219848 cri.go:89] found id: ""
	I1217 12:02:24.147263 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.147273 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:24.147280 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:24.147339 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:24.173122 3219848 cri.go:89] found id: ""
	I1217 12:02:24.173147 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.173157 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:24.173163 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:24.173223 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:24.207220 3219848 cri.go:89] found id: ""
	I1217 12:02:24.207243 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.207253 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:24.207259 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:24.207324 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:24.232981 3219848 cri.go:89] found id: ""
	I1217 12:02:24.233004 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.233013 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:24.233020 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:24.233087 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:24.266790 3219848 cri.go:89] found id: ""
	I1217 12:02:24.266815 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.266825 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:24.266832 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:24.266896 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:24.299029 3219848 cri.go:89] found id: ""
	I1217 12:02:24.299056 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.299065 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:24.299072 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:24.299150 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:24.332940 3219848 cri.go:89] found id: ""
	I1217 12:02:24.332966 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.332975 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:24.332984 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:24.332994 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:24.358486 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:24.358520 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:24.395087 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:24.395119 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:24.453543 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:24.453581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:24.469070 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:24.469100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:24.547838 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:24.537508    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.538320    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.540311    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542086    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542713    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:24.537508    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.538320    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.540311    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542086    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542713    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:26.158720 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:02:26.235734 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:26.235852 3219848 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
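	The dashboard addon hits the identical wall ten times over, once per manifest, and minikube's addon machinery logs "apply failed, will retry" rather than aborting. If there were any doubt that the manifests themselves are present and the failure is purely the unreachable apiserver, a quick check from the host (the profile name is a placeholder; substitute your own):
	
	# List the dashboard manifests minikube is trying to apply on the node.
	minikube -p <profile> ssh "ls -l /etc/kubernetes/addons/ | grep dashboard"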
	I1217 12:02:27.048020 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:27.058730 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:27.058803 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:27.083792 3219848 cri.go:89] found id: ""
	I1217 12:02:27.083815 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.083824 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:27.083831 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:27.083893 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:27.110794 3219848 cri.go:89] found id: ""
	I1217 12:02:27.110820 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.110841 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:27.110865 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:27.110940 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:27.136730 3219848 cri.go:89] found id: ""
	I1217 12:02:27.136760 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.136768 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:27.136775 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:27.136833 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:27.161755 3219848 cri.go:89] found id: ""
	I1217 12:02:27.161780 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.161813 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:27.161819 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:27.161886 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:27.187885 3219848 cri.go:89] found id: ""
	I1217 12:02:27.187912 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.187921 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:27.187928 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:27.187987 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:27.214398 3219848 cri.go:89] found id: ""
	I1217 12:02:27.214424 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.214432 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:27.214440 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:27.214528 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:27.240617 3219848 cri.go:89] found id: ""
	I1217 12:02:27.240642 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.240652 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:27.240658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:27.240740 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:27.272907 3219848 cri.go:89] found id: ""
	I1217 12:02:27.272985 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.273008 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:27.273034 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:27.273061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:27.338834 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:27.338872 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:27.355488 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:27.355518 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:27.425201 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:27.415325    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.415952    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.418308    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.419305    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.420311    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
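The describe-nodes failure above is mechanical rather than mysterious: before running any verb, kubectl fetches the server's API group list for discovery (the memcache.go lines), and every attempt fails with "connection refused" on [::1]:8443 because no kube-apiserver is listening, consistent with the empty crictl results. A minimal probe that reproduces the same symptom, assuming the profile's apiserver port of 8443:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // With no apiserver container running, this dial fails with the
        // same "connect: connection refused" that kubectl reports above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8443")
    }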
	I1217 12:02:27.425231 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:27.425245 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:27.451232 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:27.451264 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
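Each round of this wait loop has the same shape: probe for a running kube-apiserver process, query crictl for every expected control-plane container by name, and, when all of the queries come back empty, fall back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. crictl ps -a --quiet --name=<name> prints only container IDs (including exited containers), so empty output is exactly what produces the "No container was found matching" warnings. A minimal Go sketch of that detection step, assuming crictl is on PATH; the helper name is illustrative rather than minikube's own:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // findContainers mirrors `sudo crictl ps -a --quiet --name=<name>`:
    // --quiet prints IDs only, -a includes exited containers, and an empty
    // result means no matching container exists on the node.
    func findContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps: %w", err)
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := findContainers(name)
            if err != nil {
                fmt.Println("error:", err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
            }
        }
    }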
	I1217 12:02:29.988282 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:29.998906 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:29.998982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:30.032593 3219848 cri.go:89] found id: ""
	I1217 12:02:30.032619 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.032628 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:30.032635 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:30.032703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:30.065200 3219848 cri.go:89] found id: ""
	I1217 12:02:30.065230 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.065239 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:30.065247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:30.065319 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:30.100730 3219848 cri.go:89] found id: ""
	I1217 12:02:30.100758 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.100767 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:30.100773 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:30.100837 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:30.127247 3219848 cri.go:89] found id: ""
	I1217 12:02:30.127273 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.127293 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:30.127299 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:30.127380 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:30.156586 3219848 cri.go:89] found id: ""
	I1217 12:02:30.156611 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.156619 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:30.156627 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:30.156692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:30.182150 3219848 cri.go:89] found id: ""
	I1217 12:02:30.182174 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.182215 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:30.182222 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:30.182285 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:30.209339 3219848 cri.go:89] found id: ""
	I1217 12:02:30.209366 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.209376 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:30.209383 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:30.209443 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:30.235224 3219848 cri.go:89] found id: ""
	I1217 12:02:30.235250 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.235259 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:30.235268 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:30.235279 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:30.305932 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:30.297455    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.298291    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300025    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300319    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.301797    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:30.305955 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:30.305968 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:30.335249 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:30.335282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:30.366831 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:30.366859 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:30.423045 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:30.423081 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
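The "Gathering logs for ..." steps are plain shell-outs: the last 400 journal lines for each relevant systemd unit, plus kernel messages at warning severity or worse. A compact sketch of the same collection, assuming journalctl and dmesg are available (the dmesg invocation here drops the -PH formatting flags used in the log and keeps only the severity filter):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := map[string]string{
            "kubelet":    "sudo journalctl -u kubelet -n 400",
            "containerd": "sudo journalctl -u containerd -n 400",
            "dmesg":      "sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmd := range cmds {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            fmt.Printf("=== %s: %d bytes captured ===\n", name, len(out))
        }
    }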
	I1217 12:02:32.941855 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:32.953974 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:32.954052 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:32.986211 3219848 cri.go:89] found id: ""
	I1217 12:02:32.986233 3219848 logs.go:282] 0 containers: []
	W1217 12:02:32.986242 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:32.986249 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:32.986333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:33.015180 3219848 cri.go:89] found id: ""
	I1217 12:02:33.015209 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.015218 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:33.015227 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:33.015292 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:33.043066 3219848 cri.go:89] found id: ""
	I1217 12:02:33.043132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.043182 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:33.043216 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:33.043303 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:33.070150 3219848 cri.go:89] found id: ""
	I1217 12:02:33.070178 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.070187 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:33.070194 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:33.070254 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:33.099464 3219848 cri.go:89] found id: ""
	I1217 12:02:33.099502 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.099511 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:33.099519 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:33.099592 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:33.125134 3219848 cri.go:89] found id: ""
	I1217 12:02:33.125161 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.125170 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:33.125177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:33.125238 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:33.152585 3219848 cri.go:89] found id: ""
	I1217 12:02:33.152608 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.152617 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:33.152638 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:33.152703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:33.177715 3219848 cri.go:89] found id: ""
	I1217 12:02:33.177740 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.177749 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:33.177759 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:33.177770 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:33.234986 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:33.235024 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:33.255146 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:33.255186 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:33.339613 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:33.330741    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.331497    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333272    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333726    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.334950    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:33.339647 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:33.339660 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:33.366064 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:33.366101 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:35.894549 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:35.904950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:35.905022 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:35.933462 3219848 cri.go:89] found id: ""
	I1217 12:02:35.933485 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.933493 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:35.933499 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:35.933558 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:35.958161 3219848 cri.go:89] found id: ""
	I1217 12:02:35.958228 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.958254 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:35.958275 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:35.958364 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:35.983016 3219848 cri.go:89] found id: ""
	I1217 12:02:35.983041 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.983051 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:35.983057 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:35.983126 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:36.015482 3219848 cri.go:89] found id: ""
	I1217 12:02:36.015527 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.015536 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:36.015543 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:36.015620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:36.046357 3219848 cri.go:89] found id: ""
	I1217 12:02:36.046393 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.046406 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:36.046416 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:36.046577 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:36.072553 3219848 cri.go:89] found id: ""
	I1217 12:02:36.072587 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.072596 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:36.072602 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:36.072662 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:36.099878 3219848 cri.go:89] found id: ""
	I1217 12:02:36.099911 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.099927 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:36.099934 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:36.100024 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:36.129180 3219848 cri.go:89] found id: ""
	I1217 12:02:36.129203 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.129212 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:36.129221 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:36.129234 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:36.186216 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:36.186254 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:36.203136 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:36.203166 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:36.273412 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:36.264653    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.265536    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267226    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267782    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.269421    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:36.273433 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:36.273446 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:36.300346 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:36.300378 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:38.840293 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:38.851323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:38.851395 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:38.878324 3219848 cri.go:89] found id: ""
	I1217 12:02:38.878347 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.878356 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:38.878362 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:38.878418 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:38.904803 3219848 cri.go:89] found id: ""
	I1217 12:02:38.904824 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.904833 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:38.904839 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:38.904897 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:38.929044 3219848 cri.go:89] found id: ""
	I1217 12:02:38.929067 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.929075 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:38.929081 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:38.929148 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:38.953075 3219848 cri.go:89] found id: ""
	I1217 12:02:38.953101 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.953109 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:38.953119 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:38.953179 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:38.982538 3219848 cri.go:89] found id: ""
	I1217 12:02:38.982560 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.982569 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:38.982575 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:38.982634 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:39.009774 3219848 cri.go:89] found id: ""
	I1217 12:02:39.009797 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.009806 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:39.009813 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:39.009877 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:39.035772 3219848 cri.go:89] found id: ""
	I1217 12:02:39.035848 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.035872 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:39.035894 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:39.035966 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:39.070261 3219848 cri.go:89] found id: ""
	I1217 12:02:39.070282 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.070291 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:39.070299 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:39.070311 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:39.086150 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:39.086228 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:39.158855 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:39.150093    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.151044    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.152764    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.153406    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.155059    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:39.158917 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:39.158948 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:39.184120 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:39.184154 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:39.228401 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:39.228446 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:41.030449 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:02:41.099078 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:41.099186 3219848 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 12:02:41.102220 3219848 out.go:179] * Enabled addons: 
	I1217 12:02:41.105179 3219848 addons.go:530] duration metric: took 1m49.649331261s for enable addons: enabled=[]
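The storageclass addon fails for the same root cause as everything else in this window: kubectl apply first downloads the server's OpenAPI schema to validate the manifest, and that request dies on the refused connection. The suggested --validate=false would only skip schema validation; the apply itself would still need a reachable apiserver. Per the "apply failed, will retry" line, minikube retries the addon callback; a sketch of that retry shape, with illustrative attempt count and backoff rather than minikube's actual values:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs kubectl apply until it succeeds or the
    // attempts run out, mirroring the "apply failed, will retry" behavior.
    func applyWithRetry(manifest string, attempts int) error {
        var lastErr error
        for i := 1; i <= attempts; i++ {
            // sudo accepts leading VAR=value assignments, as in the log line.
            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "kubectl", "apply", "--force", "-f", manifest)
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("attempt %d: %v: %s", i, err, out)
            time.Sleep(2 * time.Second) // fixed backoff, for the sketch only
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
            fmt.Println("giving up:", err)
        }
    }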
	I1217 12:02:41.789011 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:41.800666 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:41.800741 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:41.831181 3219848 cri.go:89] found id: ""
	I1217 12:02:41.831214 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.831222 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:41.831229 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:41.831292 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:41.855868 3219848 cri.go:89] found id: ""
	I1217 12:02:41.855893 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.855901 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:41.855909 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:41.855970 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:41.880077 3219848 cri.go:89] found id: ""
	I1217 12:02:41.880102 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.880110 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:41.880117 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:41.880174 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:41.904526 3219848 cri.go:89] found id: ""
	I1217 12:02:41.904553 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.904562 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:41.904568 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:41.904630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:41.930234 3219848 cri.go:89] found id: ""
	I1217 12:02:41.930257 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.930266 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:41.930272 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:41.930329 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:41.958809 3219848 cri.go:89] found id: ""
	I1217 12:02:41.958835 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.958844 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:41.958851 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:41.958909 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:41.983616 3219848 cri.go:89] found id: ""
	I1217 12:02:41.983642 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.983652 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:41.983658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:41.983723 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:42.011680 3219848 cri.go:89] found id: ""
	I1217 12:02:42.011705 3219848 logs.go:282] 0 containers: []
	W1217 12:02:42.011714 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:42.011725 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:42.011736 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:42.073172 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:42.073215 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:42.092098 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:42.092139 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:42.170615 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:42.158978    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.160329    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.161071    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.163397    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.164052    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:42.170644 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:42.170669 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:42.200096 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:42.200137 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:44.738108 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:44.751949 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:44.752049 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:44.785830 3219848 cri.go:89] found id: ""
	I1217 12:02:44.785869 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.785902 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:44.785911 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:44.785988 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:44.815102 3219848 cri.go:89] found id: ""
	I1217 12:02:44.815138 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.815148 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:44.815154 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:44.815256 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:44.843623 3219848 cri.go:89] found id: ""
	I1217 12:02:44.843658 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.843667 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:44.843674 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:44.843768 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:44.868589 3219848 cri.go:89] found id: ""
	I1217 12:02:44.868612 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.868620 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:44.868626 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:44.868710 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:44.893731 3219848 cri.go:89] found id: ""
	I1217 12:02:44.893757 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.893767 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:44.893774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:44.893877 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:44.920703 3219848 cri.go:89] found id: ""
	I1217 12:02:44.920732 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.920741 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:44.920748 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:44.920807 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:44.945270 3219848 cri.go:89] found id: ""
	I1217 12:02:44.945307 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.945317 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:44.945323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:44.945390 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:44.974571 3219848 cri.go:89] found id: ""
	I1217 12:02:44.974669 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.974693 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:44.974723 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:44.974767 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:45.011160 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:45.011262 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:45.135210 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:45.135297 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:45.172030 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:45.172125 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:45.299181 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:45.286225    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.288700    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.289610    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.291554    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.292270    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:45.299256 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:45.299270 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
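Each polling round opens with the probe on the next line: pgrep -xnf kube-apiserver.*minikube.* matches against the full command line (-f), requires the whole line to match (-x), and prints only the newest PID (-n), so a non-zero exit simply means no such process exists yet and the loop falls back into the crictl queries. A one-function sketch, with an illustrative helper name:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverRunning reports whether a process whose full command line
    // matches the pattern exists; pgrep exits non-zero when nothing matches.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        fmt.Println("kube-apiserver running:", apiserverRunning())
    }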
	I1217 12:02:47.834408 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:47.845640 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:47.845713 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:47.875767 3219848 cri.go:89] found id: ""
	I1217 12:02:47.875793 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.875803 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:47.875809 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:47.875894 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:47.900760 3219848 cri.go:89] found id: ""
	I1217 12:02:47.900798 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.900808 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:47.900815 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:47.900916 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:47.925606 3219848 cri.go:89] found id: ""
	I1217 12:02:47.925640 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.925650 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:47.925656 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:47.925730 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:47.953896 3219848 cri.go:89] found id: ""
	I1217 12:02:47.953919 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.953928 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:47.953935 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:47.954003 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:47.979667 3219848 cri.go:89] found id: ""
	I1217 12:02:47.979736 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.979759 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:47.979780 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:47.979871 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:48.009398 3219848 cri.go:89] found id: ""
	I1217 12:02:48.009477 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.009502 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:48.009528 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:48.009630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:48.039277 3219848 cri.go:89] found id: ""
	I1217 12:02:48.039349 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.039373 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:48.039400 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:48.039498 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:48.065115 3219848 cri.go:89] found id: ""
	I1217 12:02:48.065140 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.065151 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:48.065162 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:48.065175 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:48.081650 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:48.081680 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:48.149022 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:48.140864    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.141345    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.142918    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.143402    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.144920    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:48.149046 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:48.149060 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:48.174962 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:48.174999 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:48.204617 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:48.204645 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
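
Every "describe nodes" attempt in this stretch fails the same way: the version-pinned kubectl on the node dials https://localhost:8443, and because no kube-apiserver container exists (each crictl listing above returns an empty ID list), the TCP connection is refused before any API request is made. A minimal Go sketch of that reachability probe, illustrative only and not minikube code:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// With no apiserver listening, this dial fails with "connection refused",
    	// which is exactly the error kubectl reports in the blocks above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("apiserver port is open")
    }
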
	I1217 12:02:50.772582 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:50.784158 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:50.784228 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:50.814532 3219848 cri.go:89] found id: ""
	I1217 12:02:50.814555 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.814563 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:50.814569 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:50.814628 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:50.848966 3219848 cri.go:89] found id: ""
	I1217 12:02:50.848989 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.848997 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:50.849004 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:50.849066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:50.873257 3219848 cri.go:89] found id: ""
	I1217 12:02:50.873284 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.873293 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:50.873300 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:50.873364 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:50.897538 3219848 cri.go:89] found id: ""
	I1217 12:02:50.897564 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.897573 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:50.897579 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:50.897638 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:50.922912 3219848 cri.go:89] found id: ""
	I1217 12:02:50.922937 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.922946 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:50.922953 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:50.923013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:50.948094 3219848 cri.go:89] found id: ""
	I1217 12:02:50.948120 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.948129 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:50.948136 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:50.948196 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:50.974087 3219848 cri.go:89] found id: ""
	I1217 12:02:50.974114 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.974124 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:50.974131 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:50.974190 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:51.006127 3219848 cri.go:89] found id: ""
	I1217 12:02:51.006159 3219848 logs.go:282] 0 containers: []
	W1217 12:02:51.006169 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:51.006256 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:51.006275 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:51.032290 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:51.032323 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:51.063443 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:51.063469 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:51.119487 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:51.119523 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:51.138001 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:51.138031 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:51.208764 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:51.200371    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.201009    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202548    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202968    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.204568    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:51.200371    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.201009    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202548    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202968    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.204568    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
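
The timestamps show the wait loop's cadence: a sudo pgrep -xnf kube-apiserver.*minikube.* probe runs roughly every three seconds (12:02:47, :50, :53, :56, ...), and each miss triggers another round of container listings and log gathering. A sketch of that shape, assuming a short deadline for illustration only (the real timeout is much longer):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(time.Minute) // assumption: illustrative deadline, not minikube's
    	for time.Now().Before(deadline) {
    		// -x exact match, -n newest process, -f match the full command line
    		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
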
	I1217 12:02:53.709691 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:53.720597 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:53.720678 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:53.745776 3219848 cri.go:89] found id: ""
	I1217 12:02:53.745802 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.745811 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:53.745819 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:53.745878 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:53.775989 3219848 cri.go:89] found id: ""
	I1217 12:02:53.776013 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.776021 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:53.776027 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:53.776098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:53.810226 3219848 cri.go:89] found id: ""
	I1217 12:02:53.810253 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.810262 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:53.810269 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:53.810333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:53.839758 3219848 cri.go:89] found id: ""
	I1217 12:02:53.839778 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.839787 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:53.839793 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:53.839857 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:53.864680 3219848 cri.go:89] found id: ""
	I1217 12:02:53.864745 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.864768 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:53.864788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:53.864872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:53.888540 3219848 cri.go:89] found id: ""
	I1217 12:02:53.888561 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.888569 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:53.888576 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:53.888640 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:53.912908 3219848 cri.go:89] found id: ""
	I1217 12:02:53.912973 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.912998 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:53.913015 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:53.913087 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:53.942233 3219848 cri.go:89] found id: ""
	I1217 12:02:53.942254 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.942263 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:53.942285 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:53.942300 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:53.998450 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:53.998485 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:54.017836 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:54.017867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:54.086072 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:54.077439    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.078327    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.079921    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.080399    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.082101    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:54.077439    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.078327    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.079921    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.080399    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.082101    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:54.086097 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:54.086110 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:54.112391 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:54.112586 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
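
The paired found id: "" / 0 containers lines come straight from the crictl invocations: ps -a includes exited containers, --quiet prints only container IDs (one per line), and --name filters by container name, so empty output means the component was never even created. A small sketch of the same check, using "etcd" as an illustrative name:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Empty stdout here is what the log records as `found id: ""`.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=etcd").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
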
	I1217 12:02:56.648110 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:56.658791 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:56.658863 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:56.685484 3219848 cri.go:89] found id: ""
	I1217 12:02:56.685508 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.685516 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:56.685526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:56.685587 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:56.710064 3219848 cri.go:89] found id: ""
	I1217 12:02:56.710126 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.710141 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:56.710148 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:56.710219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:56.735357 3219848 cri.go:89] found id: ""
	I1217 12:02:56.735383 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.735393 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:56.735404 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:56.735465 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:56.767684 3219848 cri.go:89] found id: ""
	I1217 12:02:56.767710 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.767724 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:56.767731 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:56.767792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:56.809924 3219848 cri.go:89] found id: ""
	I1217 12:02:56.809951 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.809960 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:56.809968 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:56.810026 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:56.839853 3219848 cri.go:89] found id: ""
	I1217 12:02:56.839879 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.839889 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:56.839895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:56.839956 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:56.866637 3219848 cri.go:89] found id: ""
	I1217 12:02:56.866663 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.866672 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:56.866679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:56.866746 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:56.891828 3219848 cri.go:89] found id: ""
	I1217 12:02:56.891853 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.891862 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:56.891872 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:56.891885 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:56.948612 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:56.948652 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:56.964832 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:56.964864 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:57.035706 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:57.026894    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.027527    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.029280    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.030006    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.031607    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:57.026894    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.027527    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.029280    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.030006    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.031607    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:57.035725 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:57.035783 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:57.061297 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:57.061332 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
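
Each gathering round runs the same five probes: journalctl -u kubelet -n 400 and -u containerd -n 400 tail the last 400 lines of those systemd units; dmesg is filtered to warning severity and above (-P no pager, -H human-readable, -L=never disables color); "describe nodes" uses the pinned kubectl; and "container status" prefers crictl but falls back to docker ps -a if crictl is absent or fails. A sketch that replays the shell probes locally (illustrative; the real runs go through ssh_runner into the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	gathers := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"containerd":       "sudo journalctl -u containerd -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range gathers {
    		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("== %s ==\n%s\n", name, out)
    	}
    }
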
	I1217 12:02:59.592887 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:59.603568 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:59.603647 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:59.628351 3219848 cri.go:89] found id: ""
	I1217 12:02:59.628378 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.628387 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:59.628395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:59.628503 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:59.654358 3219848 cri.go:89] found id: ""
	I1217 12:02:59.654380 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.654388 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:59.654394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:59.654456 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:59.679684 3219848 cri.go:89] found id: ""
	I1217 12:02:59.679703 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.679717 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:59.679723 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:59.679786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:59.706460 3219848 cri.go:89] found id: ""
	I1217 12:02:59.706491 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.706501 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:59.706507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:59.706570 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:59.736016 3219848 cri.go:89] found id: ""
	I1217 12:02:59.736041 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.736050 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:59.736057 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:59.736116 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:59.778297 3219848 cri.go:89] found id: ""
	I1217 12:02:59.778323 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.778332 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:59.778339 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:59.778404 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:59.809983 3219848 cri.go:89] found id: ""
	I1217 12:02:59.810009 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.810018 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:59.810025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:59.810082 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:59.843076 3219848 cri.go:89] found id: ""
	I1217 12:02:59.843102 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.843110 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:59.843119 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:59.843131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:59.902975 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:59.903012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:59.918923 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:59.918958 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:59.987681 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:59.979645    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.980298    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.981764    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.982249    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.983739    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:59.979645    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.980298    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.981764    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.982249    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.983739    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:59.987704 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:59.987716 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:00.126179 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:00.128746 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
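
The kubectl path in these commands is derived from the Kubernetes version under test: minikube keeps per-version binaries under /var/lib/minikube/binaries/<version>/ and points them at the node-local kubeconfig. A trivial sketch of how that invocation string is assembled:

    package main

    import "fmt"

    func main() {
    	version := "v1.35.0-rc.1" // the version shown throughout this log
    	fmt.Printf(
    		"sudo /var/lib/minikube/binaries/%s/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig\n",
    		version)
    }
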
	I1217 12:03:02.747342 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:02.759443 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:02.759536 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:02.813879 3219848 cri.go:89] found id: ""
	I1217 12:03:02.813907 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.813917 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:02.813924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:02.813996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:02.856869 3219848 cri.go:89] found id: ""
	I1217 12:03:02.856899 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.856908 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:02.856915 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:02.856973 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:02.883984 3219848 cri.go:89] found id: ""
	I1217 12:03:02.884015 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.884024 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:02.884031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:02.884094 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:02.911584 3219848 cri.go:89] found id: ""
	I1217 12:03:02.911605 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.911613 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:02.911619 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:02.911677 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:02.941815 3219848 cri.go:89] found id: ""
	I1217 12:03:02.941837 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.941847 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:02.941853 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:02.941920 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:02.971949 3219848 cri.go:89] found id: ""
	I1217 12:03:02.971972 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.971980 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:02.971986 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:02.972045 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:02.997848 3219848 cri.go:89] found id: ""
	I1217 12:03:02.997875 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.997884 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:02.997891 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:02.997952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:03.025293 3219848 cri.go:89] found id: ""
	I1217 12:03:03.025321 3219848 logs.go:282] 0 containers: []
	W1217 12:03:03.025330 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:03.025339 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:03.025353 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:03.095479 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:03.086357    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.087966    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.088719    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.089902    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.090320    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:03.086357    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.087966    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.088719    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.089902    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.090320    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:03.095503 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:03.095517 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:03.121627 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:03.121668 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:03.152132 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:03.152162 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:03.208671 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:03.208717 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
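
The five identical memcache.go:265 lines per attempt appear to be kubectl's discovery client retrying the /api group list before it gives up with the final "connection to the server ... refused" message. A hedged client-go sketch of the same discovery call (assumes the k8s.io/client-go module is available; the kubeconfig path is the one shown in the log):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/discovery"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// With no apiserver behind the kubeconfig's endpoint, this returns the
    	// same "connection refused" error that kubectl logs above.
    	if _, err := dc.ServerGroups(); err != nil {
    		fmt.Println("discovery failed:", err)
    	}
    }
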
	I1217 12:03:05.726193 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:05.737765 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:05.737842 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:05.803315 3219848 cri.go:89] found id: ""
	I1217 12:03:05.803338 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.803355 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:05.803364 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:05.803424 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:05.852889 3219848 cri.go:89] found id: ""
	I1217 12:03:05.852952 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.852967 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:05.852975 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:05.853035 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:05.885239 3219848 cri.go:89] found id: ""
	I1217 12:03:05.885263 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.885274 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:05.885281 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:05.885346 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:05.909571 3219848 cri.go:89] found id: ""
	I1217 12:03:05.909601 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.909610 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:05.909617 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:05.909683 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:05.944648 3219848 cri.go:89] found id: ""
	I1217 12:03:05.944714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.944729 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:05.944742 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:05.944801 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:05.969671 3219848 cri.go:89] found id: ""
	I1217 12:03:05.969707 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.969716 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:05.969738 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:05.969819 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:05.994549 3219848 cri.go:89] found id: ""
	I1217 12:03:05.994575 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.994584 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:05.994590 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:05.994648 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:06.025175 3219848 cri.go:89] found id: ""
	I1217 12:03:06.025201 3219848 logs.go:282] 0 containers: []
	W1217 12:03:06.025212 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:06.025223 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:06.025255 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:06.094463 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:06.085807    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.086594    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.088396    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.089018    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.090252    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:06.085807    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.086594    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.088396    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.089018    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.090252    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:06.094488 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:06.094503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:06.120857 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:06.120892 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:06.148825 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:06.148854 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:06.207501 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:06.207537 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
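
The root /run/containerd/runc/k8s.io named in every cri.go line is containerd's runc runtime state directory for the "k8s.io" namespace, where the CRI plugin keeps Kubernetes containers; an empty namespace is consistent with every crictl listing in this log coming back empty. If containerd's own ctr CLI is installed (an assumption; it typically ships alongside containerd), the namespace can be inspected directly:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Lists containers in containerd's "k8s.io" namespace; empty output
    	// matches the empty crictl listings recorded in this log.
    	out, err := exec.Command("sudo", "ctr", "--namespace", "k8s.io", "containers", "ls").CombinedOutput()
    	if err != nil {
    		fmt.Println("ctr failed:", err)
    	}
    	fmt.Print(string(out))
    }
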
	I1217 12:03:08.724013 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:08.734763 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:08.734854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:08.797461 3219848 cri.go:89] found id: ""
	I1217 12:03:08.797536 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.797561 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:08.797583 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:08.797692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:08.849950 3219848 cri.go:89] found id: ""
	I1217 12:03:08.850015 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.850031 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:08.850039 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:08.850099 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:08.876353 3219848 cri.go:89] found id: ""
	I1217 12:03:08.876378 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.876387 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:08.876394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:08.876474 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:08.902743 3219848 cri.go:89] found id: ""
	I1217 12:03:08.902767 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.902776 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:08.902783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:08.902847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:08.928380 3219848 cri.go:89] found id: ""
	I1217 12:03:08.928405 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.928439 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:08.928447 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:08.928508 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:08.953372 3219848 cri.go:89] found id: ""
	I1217 12:03:08.953397 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.953406 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:08.953413 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:08.953481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:08.977913 3219848 cri.go:89] found id: ""
	I1217 12:03:08.977935 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.977945 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:08.977951 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:08.978015 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:09.014088 3219848 cri.go:89] found id: ""
	I1217 12:03:09.014114 3219848 logs.go:282] 0 containers: []
	W1217 12:03:09.014123 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:09.014133 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:09.014144 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:09.069559 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:09.069599 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:09.085849 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:09.085877 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:09.153859 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:09.145727    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.146529    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148157    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148779    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.150028    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:09.145727    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.146529    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148157    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148779    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.150028    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:09.153879 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:09.153892 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:09.179067 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:09.179099 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:11.708448 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:11.719221 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:11.719291 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:11.744009 3219848 cri.go:89] found id: ""
	I1217 12:03:11.744033 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.744042 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:11.744048 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:11.744104 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:11.795640 3219848 cri.go:89] found id: ""
	I1217 12:03:11.795663 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.795671 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:11.795678 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:11.795739 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:11.851553 3219848 cri.go:89] found id: ""
	I1217 12:03:11.851573 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.851581 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:11.851587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:11.851642 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:11.879197 3219848 cri.go:89] found id: ""
	I1217 12:03:11.879272 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.879294 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:11.879316 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:11.879432 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:11.904743 3219848 cri.go:89] found id: ""
	I1217 12:03:11.904816 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.904839 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:11.904864 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:11.904974 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:11.930378 3219848 cri.go:89] found id: ""
	I1217 12:03:11.930452 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.930482 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:11.930491 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:11.930562 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:11.955446 3219848 cri.go:89] found id: ""
	I1217 12:03:11.955475 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.955485 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:11.955492 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:11.955553 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:11.980056 3219848 cri.go:89] found id: ""
	I1217 12:03:11.980082 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.980092 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:11.980102 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:11.980113 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:12.039392 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:12.039430 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:12.055724 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:12.055752 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:12.120835 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:12.111964    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.112751    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114462    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114770    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.116985    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:12.111964    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.112751    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114462    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114770    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.116985    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:12.120858 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:12.120871 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:12.145568 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:12.145601 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
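	Each of the stderr blocks above fails the same way: kubectl cannot fetch the API group list because nothing is listening on [::1]:8443, which is consistent with crictl finding no kube-apiserver container at all. A quick manual check that distinguishes "apiserver down" from "client misconfigured" (the port mirrors this cluster's apiserver address; the commands are a sketch, not part of the harness):

	    # A 401/403 from /healthz still proves a listener exists;
	    # "connection refused" means no apiserver process is bound to the port.
	    curl -ksS --max-time 5 https://localhost:8443/healthz || echo "no listener on 8443"
	    # Cross-check with the runtime; empty output means the container
	    # never started or has already been removed.
	    sudo crictl ps -a --quiet --name=kube-apiserver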
	I1217 12:03:14.685252 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:14.695909 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:14.695982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:14.722094 3219848 cri.go:89] found id: ""
	I1217 12:03:14.722116 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.722124 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:14.722131 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:14.722191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:14.747765 3219848 cri.go:89] found id: ""
	I1217 12:03:14.747790 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.747799 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:14.747805 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:14.747863 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:14.832061 3219848 cri.go:89] found id: ""
	I1217 12:03:14.832086 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.832096 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:14.832103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:14.832175 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:14.861589 3219848 cri.go:89] found id: ""
	I1217 12:03:14.861612 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.861621 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:14.861628 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:14.861687 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:14.887122 3219848 cri.go:89] found id: ""
	I1217 12:03:14.887144 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.887153 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:14.887160 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:14.887219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:14.913961 3219848 cri.go:89] found id: ""
	I1217 12:03:14.913988 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.913996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:14.914003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:14.914063 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:14.940509 3219848 cri.go:89] found id: ""
	I1217 12:03:14.940539 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.940584 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:14.940599 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:14.940684 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:14.968190 3219848 cri.go:89] found id: ""
	I1217 12:03:14.968260 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.968286 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:14.968314 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:14.968341 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:15.025687 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:15.025728 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:15.048063 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:15.048161 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:15.120549 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:15.111260    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.111932    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.113791    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.114487    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.116204    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:15.111260    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.111932    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.113791    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.114487    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.116204    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:15.120575 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:15.120590 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:15.147374 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:15.147419 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
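	The timestamps show the gatherer re-probing on a roughly three-second cadence (12:03:11, 12:03:14, 12:03:17, ...): first a pgrep for the apiserver process, then a per-component crictl listing. A sketch of an equivalent wait loop; the 120s deadline and the component list are illustrative, not minikube's actual values:

	    DEADLINE=$((SECONDS + 120))
	    while (( SECONDS < DEADLINE )); do
	      # -x exact match, -n newest, -f match against the full command line
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "apiserver process is up"; exit 0
	      fi
	      for c in kube-apiserver etcd kube-scheduler kube-controller-manager; do
	        [ -n "$(sudo crictl ps -a --quiet --name="$c")" ] || echo "no container matching $c"
	      done
	      sleep 3
	    done
	    echo "timed out waiting for kube-apiserver" >&2; exit 1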
	I1217 12:03:17.678613 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:17.689902 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:17.689996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:17.715580 3219848 cri.go:89] found id: ""
	I1217 12:03:17.715617 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.715626 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:17.715634 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:17.715706 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:17.746656 3219848 cri.go:89] found id: ""
	I1217 12:03:17.746680 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.746689 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:17.746696 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:17.746757 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:17.777911 3219848 cri.go:89] found id: ""
	I1217 12:03:17.777981 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.778005 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:17.778031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:17.778142 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:17.841621 3219848 cri.go:89] found id: ""
	I1217 12:03:17.841682 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.841714 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:17.841734 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:17.841839 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:17.874462 3219848 cri.go:89] found id: ""
	I1217 12:03:17.874536 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.874559 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:17.874573 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:17.874655 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:17.899519 3219848 cri.go:89] found id: ""
	I1217 12:03:17.899563 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.899573 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:17.899580 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:17.899654 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:17.925535 3219848 cri.go:89] found id: ""
	I1217 12:03:17.925559 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.925568 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:17.925574 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:17.925642 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:17.950672 3219848 cri.go:89] found id: ""
	I1217 12:03:17.950737 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.950761 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:17.950787 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:17.950826 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:18.006915 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:18.006964 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:18.024598 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:18.024632 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:18.093800 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:18.085487    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.086439    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.087176    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.088142    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.089680    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:18.085487    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.086439    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.087176    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.088142    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.089680    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:18.093830 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:18.093843 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:18.120115 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:18.120150 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:20.651699 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:20.662809 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:20.662885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:20.692750 3219848 cri.go:89] found id: ""
	I1217 12:03:20.692772 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.692781 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:20.692787 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:20.692854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:20.723234 3219848 cri.go:89] found id: ""
	I1217 12:03:20.723259 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.723267 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:20.723273 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:20.723334 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:20.749812 3219848 cri.go:89] found id: ""
	I1217 12:03:20.749833 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.749841 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:20.749847 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:20.749903 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:20.799186 3219848 cri.go:89] found id: ""
	I1217 12:03:20.799208 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.799216 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:20.799222 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:20.799280 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:20.850498 3219848 cri.go:89] found id: ""
	I1217 12:03:20.850573 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.850596 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:20.850617 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:20.850735 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:20.881588 3219848 cri.go:89] found id: ""
	I1217 12:03:20.881660 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.881682 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:20.881702 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:20.881790 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:20.911209 3219848 cri.go:89] found id: ""
	I1217 12:03:20.911275 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.911301 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:20.911316 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:20.911391 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:20.938447 3219848 cri.go:89] found id: ""
	I1217 12:03:20.938473 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.938483 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:20.938492 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:20.938503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:20.995421 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:20.995463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:21.013450 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:21.013483 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:21.084404 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:21.075746    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.076533    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078205    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078900    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.080479    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:21.075746    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.076533    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078205    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078900    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.080479    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:21.084449 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:21.084463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:21.111296 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:21.111335 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:23.647949 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:23.658668 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:23.658737 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:23.685275 3219848 cri.go:89] found id: ""
	I1217 12:03:23.685298 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.685307 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:23.685314 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:23.685375 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:23.711416 3219848 cri.go:89] found id: ""
	I1217 12:03:23.711466 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.711478 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:23.711485 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:23.711549 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:23.738391 3219848 cri.go:89] found id: ""
	I1217 12:03:23.738418 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.738427 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:23.738433 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:23.738492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:23.801227 3219848 cri.go:89] found id: ""
	I1217 12:03:23.801253 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.801262 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:23.801268 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:23.801327 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:23.837564 3219848 cri.go:89] found id: ""
	I1217 12:03:23.837585 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.837593 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:23.837600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:23.837660 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:23.864057 3219848 cri.go:89] found id: ""
	I1217 12:03:23.864078 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.864086 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:23.864093 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:23.864159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:23.888263 3219848 cri.go:89] found id: ""
	I1217 12:03:23.888289 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.888298 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:23.888305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:23.888363 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:23.917533 3219848 cri.go:89] found id: ""
	I1217 12:03:23.917555 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.917564 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:23.917573 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:23.917584 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:23.946496 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:23.946525 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:24.003650 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:24.003697 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:24.022449 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:24.022482 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:24.093823 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:24.084998    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.085736    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.087440    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.088190    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.089867    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:24.084998    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.085736    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.087440    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.088190    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.089867    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:24.093845 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:24.093858 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:26.622844 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:26.634100 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:26.634173 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:26.662315 3219848 cri.go:89] found id: ""
	I1217 12:03:26.662341 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.662350 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:26.662357 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:26.662417 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:26.689598 3219848 cri.go:89] found id: ""
	I1217 12:03:26.689623 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.689633 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:26.689640 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:26.689704 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:26.716815 3219848 cri.go:89] found id: ""
	I1217 12:03:26.716841 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.716850 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:26.716858 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:26.716926 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:26.743338 3219848 cri.go:89] found id: ""
	I1217 12:03:26.743364 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.743375 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:26.743382 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:26.743447 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:26.799290 3219848 cri.go:89] found id: ""
	I1217 12:03:26.799326 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.799335 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:26.799342 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:26.799412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:26.854473 3219848 cri.go:89] found id: ""
	I1217 12:03:26.854539 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.854555 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:26.854563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:26.854625 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:26.880552 3219848 cri.go:89] found id: ""
	I1217 12:03:26.880581 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.880591 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:26.880598 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:26.880659 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:26.906009 3219848 cri.go:89] found id: ""
	I1217 12:03:26.906042 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.906052 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:26.906061 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:26.906072 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:26.971795 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:26.963736    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.964328    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.965821    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.966197    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.967699    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:26.963736    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.964328    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.965821    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.966197    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.967699    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:26.971818 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:26.971831 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:26.996929 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:26.996968 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:27.031442 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:27.031479 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:27.088296 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:27.088330 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:29.604978 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:29.615685 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:29.615754 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:29.642346 3219848 cri.go:89] found id: ""
	I1217 12:03:29.642375 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.642384 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:29.642391 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:29.642449 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:29.669188 3219848 cri.go:89] found id: ""
	I1217 12:03:29.669214 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.669223 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:29.669230 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:29.669293 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:29.695623 3219848 cri.go:89] found id: ""
	I1217 12:03:29.695648 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.695657 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:29.695663 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:29.695729 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:29.721447 3219848 cri.go:89] found id: ""
	I1217 12:03:29.721472 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.721482 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:29.721489 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:29.721551 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:29.746217 3219848 cri.go:89] found id: ""
	I1217 12:03:29.746244 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.746253 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:29.746261 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:29.746318 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:29.797088 3219848 cri.go:89] found id: ""
	I1217 12:03:29.797122 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.797131 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:29.797137 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:29.797210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:29.845942 3219848 cri.go:89] found id: ""
	I1217 12:03:29.845962 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.845971 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:29.845977 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:29.846041 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:29.881686 3219848 cri.go:89] found id: ""
	I1217 12:03:29.881714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.881723 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:29.881733 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:29.881745 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:29.938916 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:29.938949 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:29.954625 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:29.954702 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:30.048700 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:30.033826    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.034802    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036344    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036964    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.039023    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:30.033826    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.034802    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036344    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036964    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.039023    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:30.048776 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:30.048805 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:30.081544 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:30.081588 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:32.617502 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:32.628255 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:32.628328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:32.653287 3219848 cri.go:89] found id: ""
	I1217 12:03:32.653314 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.653323 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:32.653331 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:32.653393 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:32.678914 3219848 cri.go:89] found id: ""
	I1217 12:03:32.678938 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.678946 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:32.678952 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:32.679013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:32.705809 3219848 cri.go:89] found id: ""
	I1217 12:03:32.705835 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.705845 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:32.705852 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:32.705915 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:32.736249 3219848 cri.go:89] found id: ""
	I1217 12:03:32.736278 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.736294 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:32.736301 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:32.736382 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:32.777637 3219848 cri.go:89] found id: ""
	I1217 12:03:32.777666 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.777676 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:32.777684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:32.777749 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:32.848686 3219848 cri.go:89] found id: ""
	I1217 12:03:32.848726 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.848735 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:32.848742 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:32.848811 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:32.877608 3219848 cri.go:89] found id: ""
	I1217 12:03:32.877633 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.877643 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:32.877650 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:32.877715 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:32.912387 3219848 cri.go:89] found id: ""
	I1217 12:03:32.912443 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.912453 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:32.912463 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:32.912478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:32.973780 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:32.965664    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.966474    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.968080    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.968441    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.969916    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:32.973802 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:32.973816 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:32.999779 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:32.999818 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:33.035424 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:33.035456 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:33.095096 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:33.095136 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
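	The cycle above is minikube's control-plane probe: each expected component is looked up by container name via crictl, and every lookup comes back empty. A minimal sketch of the same probe, runnable inside the node (assumption: a shell on the node, e.g. via `minikube ssh`, with crictl on PATH); the component list mirrors the names queried in the log:
	
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")   # same flags as in the log
	  if [ -z "$ids" ]; then
	    echo "no container matching \"$name\""
	  else
	    echo "$name: $ids"
	  fi
	done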
	I1217 12:03:35.611791 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:35.625472 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:35.625546 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:35.656243 3219848 cri.go:89] found id: ""
	I1217 12:03:35.656265 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.656273 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:35.656280 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:35.656339 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:35.681938 3219848 cri.go:89] found id: ""
	I1217 12:03:35.681964 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.681972 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:35.681978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:35.682038 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:35.711864 3219848 cri.go:89] found id: ""
	I1217 12:03:35.711887 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.711896 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:35.711902 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:35.711961 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:35.736900 3219848 cri.go:89] found id: ""
	I1217 12:03:35.736924 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.736932 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:35.736942 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:35.737002 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:35.796476 3219848 cri.go:89] found id: ""
	I1217 12:03:35.796553 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.796576 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:35.796598 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:35.796711 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:35.851385 3219848 cri.go:89] found id: ""
	I1217 12:03:35.851463 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.851487 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:35.851530 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:35.851627 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:35.879315 3219848 cri.go:89] found id: ""
	I1217 12:03:35.879388 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.879423 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:35.879447 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:35.879560 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:35.904369 3219848 cri.go:89] found id: ""
	I1217 12:03:35.904461 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.904485 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:35.904509 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:35.904539 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:35.962316 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:35.962358 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:35.978473 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:35.978503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:36.048228 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:36.039946    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.040655    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.042240    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.042853    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.043967    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:36.048254 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:36.048267 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:36.075099 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:36.075134 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
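	The "container status" step uses a shell fallback: it runs crictl when it is on PATH and otherwise falls back to docker. A short sketch of that pattern together with the rest of the log bundle gathered in each cycle (commands taken verbatim from the lines above; only the ordering and comments are illustrative):
	
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a           # CRI first, docker as fallback
	sudo journalctl -u containerd -n 400                                       # container runtime log tail
	sudo journalctl -u kubelet -n 400                                          # kubelet log tail
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors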
	I1217 12:03:38.607418 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:38.618789 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:38.618869 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:38.646270 3219848 cri.go:89] found id: ""
	I1217 12:03:38.646297 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.646307 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:38.646315 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:38.646379 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:38.671906 3219848 cri.go:89] found id: ""
	I1217 12:03:38.671931 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.671940 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:38.671947 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:38.672012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:38.696480 3219848 cri.go:89] found id: ""
	I1217 12:03:38.696504 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.696513 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:38.696520 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:38.696581 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:38.727000 3219848 cri.go:89] found id: ""
	I1217 12:03:38.727026 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.727036 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:38.727042 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:38.727114 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:38.782353 3219848 cri.go:89] found id: ""
	I1217 12:03:38.782381 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.782391 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:38.782398 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:38.782459 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:38.847087 3219848 cri.go:89] found id: ""
	I1217 12:03:38.847110 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.847118 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:38.847125 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:38.847183 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:38.874682 3219848 cri.go:89] found id: ""
	I1217 12:03:38.874704 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.874712 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:38.874718 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:38.874780 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:38.902269 3219848 cri.go:89] found id: ""
	I1217 12:03:38.902297 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.902306 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:38.902316 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:38.902331 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:38.967646 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:38.958671    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.959248    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961005    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961508    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.963014    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:38.967671 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:38.967685 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:38.993086 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:38.993121 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:39.024046 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:39.024079 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:39.080928 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:39.080962 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
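	Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, which returns nothing here, and the subsequent kubectl calls are refused on localhost:8443 — the apiserver is not running at all rather than merely unhealthy. A hedged sketch for confirming that by hand (assumptions: run inside the node; `ss` and `curl` are available; 8443 is the in-node apiserver port seen in the log):
	
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	sudo ss -tlnp | grep -w 8443                 || echo "nothing listening on :8443"
	curl -sk https://localhost:8443/healthz      || echo "apiserver unreachable"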
	I1217 12:03:41.597202 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:41.608508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:41.608582 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:41.634319 3219848 cri.go:89] found id: ""
	I1217 12:03:41.634344 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.634359 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:41.634366 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:41.634427 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:41.660053 3219848 cri.go:89] found id: ""
	I1217 12:03:41.660076 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.660085 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:41.660092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:41.660159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:41.686022 3219848 cri.go:89] found id: ""
	I1217 12:03:41.686047 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.686056 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:41.686062 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:41.686119 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:41.711689 3219848 cri.go:89] found id: ""
	I1217 12:03:41.711714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.711723 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:41.711729 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:41.711798 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:41.738135 3219848 cri.go:89] found id: ""
	I1217 12:03:41.738161 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.738170 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:41.738177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:41.738235 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:41.794953 3219848 cri.go:89] found id: ""
	I1217 12:03:41.794975 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.794984 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:41.794991 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:41.795051 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:41.832712 3219848 cri.go:89] found id: ""
	I1217 12:03:41.832747 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.832755 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:41.832762 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:41.832872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:41.862947 3219848 cri.go:89] found id: ""
	I1217 12:03:41.862967 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.862976 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:41.862985 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:41.862996 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:41.888484 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:41.888519 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:41.919432 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:41.919461 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:41.979083 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:41.979117 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:41.995225 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:41.995256 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:42.068500 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:42.058178    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.059172    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061060    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061946    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.063848    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:44.569152 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:44.579717 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:44.579791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:44.604579 3219848 cri.go:89] found id: ""
	I1217 12:03:44.604605 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.604614 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:44.604621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:44.604680 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:44.628954 3219848 cri.go:89] found id: ""
	I1217 12:03:44.628987 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.628997 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:44.629004 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:44.629066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:44.657345 3219848 cri.go:89] found id: ""
	I1217 12:03:44.657372 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.657381 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:44.657388 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:44.657445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:44.682960 3219848 cri.go:89] found id: ""
	I1217 12:03:44.682983 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.683000 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:44.683007 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:44.683066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:44.712406 3219848 cri.go:89] found id: ""
	I1217 12:03:44.712451 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.712461 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:44.712468 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:44.712526 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:44.737929 3219848 cri.go:89] found id: ""
	I1217 12:03:44.737952 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.737961 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:44.737967 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:44.738027 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:44.778893 3219848 cri.go:89] found id: ""
	I1217 12:03:44.778921 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.778930 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:44.778938 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:44.779003 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:44.818695 3219848 cri.go:89] found id: ""
	I1217 12:03:44.818724 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.818733 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:44.818742 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:44.818754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:44.888711 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:44.888748 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:44.905193 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:44.905224 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:44.969126 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:44.960653    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.961469    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963160    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963503    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.964997    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:44.969149 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:44.969162 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:44.995233 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:44.995272 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:47.580853 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:47.591106 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:47.591173 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:47.616262 3219848 cri.go:89] found id: ""
	I1217 12:03:47.616294 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.616304 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:47.616317 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:47.616384 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:47.641674 3219848 cri.go:89] found id: ""
	I1217 12:03:47.641702 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.641712 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:47.641718 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:47.641778 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:47.667191 3219848 cri.go:89] found id: ""
	I1217 12:03:47.667215 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.667224 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:47.667230 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:47.667296 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:47.696304 3219848 cri.go:89] found id: ""
	I1217 12:03:47.696332 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.696341 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:47.696349 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:47.696412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:47.726109 3219848 cri.go:89] found id: ""
	I1217 12:03:47.726134 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.726143 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:47.726149 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:47.726212 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:47.762878 3219848 cri.go:89] found id: ""
	I1217 12:03:47.762904 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.762914 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:47.762920 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:47.762977 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:47.824894 3219848 cri.go:89] found id: ""
	I1217 12:03:47.824932 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.824957 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:47.824973 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:47.825056 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:47.851816 3219848 cri.go:89] found id: ""
	I1217 12:03:47.851852 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.851861 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:47.851888 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:47.851907 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:47.908314 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:47.908352 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:47.924222 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:47.924250 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:47.986251 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:47.978126    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.978646    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980334    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980816    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.982319    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:47.986276 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:47.986290 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:48.010815 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:48.010855 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:50.542164 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:50.553364 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:50.553437 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:50.581389 3219848 cri.go:89] found id: ""
	I1217 12:03:50.581423 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.581432 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:50.581439 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:50.581508 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:50.610382 3219848 cri.go:89] found id: ""
	I1217 12:03:50.610405 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.610413 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:50.610422 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:50.610482 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:50.636111 3219848 cri.go:89] found id: ""
	I1217 12:03:50.636137 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.636147 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:50.636153 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:50.636218 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:50.661308 3219848 cri.go:89] found id: ""
	I1217 12:03:50.661334 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.661342 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:50.661350 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:50.661415 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:50.688144 3219848 cri.go:89] found id: ""
	I1217 12:03:50.688172 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.688181 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:50.688187 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:50.688251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:50.715059 3219848 cri.go:89] found id: ""
	I1217 12:03:50.715087 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.715096 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:50.715103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:50.715165 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:50.745229 3219848 cri.go:89] found id: ""
	I1217 12:03:50.745253 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.745262 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:50.745269 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:50.745330 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:50.793705 3219848 cri.go:89] found id: ""
	I1217 12:03:50.793735 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.793743 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:50.793752 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:50.793763 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:50.876190 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:50.876229 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:50.893552 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:50.893581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:50.960907 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:50.951439    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.952410    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954030    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954408    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.956833    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:50.960928 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:50.960942 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:50.986454 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:50.986485 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:53.522123 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:53.533167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:53.533246 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:53.558553 3219848 cri.go:89] found id: ""
	I1217 12:03:53.558580 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.558589 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:53.558596 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:53.558668 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:53.586267 3219848 cri.go:89] found id: ""
	I1217 12:03:53.586295 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.586305 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:53.586318 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:53.586383 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:53.613148 3219848 cri.go:89] found id: ""
	I1217 12:03:53.613174 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.613183 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:53.613190 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:53.613251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:53.639336 3219848 cri.go:89] found id: ""
	I1217 12:03:53.639371 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.639381 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:53.639387 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:53.639452 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:53.664632 3219848 cri.go:89] found id: ""
	I1217 12:03:53.664700 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.664730 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:53.664745 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:53.664820 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:53.689663 3219848 cri.go:89] found id: ""
	I1217 12:03:53.689733 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.689760 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:53.689774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:53.689851 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:53.714636 3219848 cri.go:89] found id: ""
	I1217 12:03:53.714707 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.714733 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:53.714747 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:53.714827 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:53.744583 3219848 cri.go:89] found id: ""
	I1217 12:03:53.744610 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.744620 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:53.744629 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:53.744640 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:53.833845 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:53.833884 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:53.853606 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:53.853632 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:53.921245 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:53.912685    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.913171    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.914992    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.915543    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.917157    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:53.921269 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:53.921282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:53.946578 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:53.946611 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
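	The timestamps show a fixed cadence: a full probe-and-gather cycle starts roughly every three seconds (12:03:32, :35, :38, :41, :44, :47, :50, :53, :56) until the outer wait gives up. A sketch of an equivalent poll-with-deadline loop (interval and deadline are illustrative values, not minikube's actual configuration):
	
	deadline=$((SECONDS + 300))                       # illustrative 5-minute budget
	until curl -sk https://localhost:8443/healthz >/dev/null 2>&1; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo "timed out waiting for apiserver" >&2
	    break
	  fi
	  sleep 3                                         # matches the ~3s cadence in the log
	done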
	I1217 12:03:56.477034 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:56.488539 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:56.488622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:56.514320 3219848 cri.go:89] found id: ""
	I1217 12:03:56.514347 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.514356 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:56.514363 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:56.514426 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:56.540629 3219848 cri.go:89] found id: ""
	I1217 12:03:56.540668 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.540676 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:56.540687 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:56.540752 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:56.571552 3219848 cri.go:89] found id: ""
	I1217 12:03:56.571586 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.571595 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:56.571602 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:56.571725 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:56.598758 3219848 cri.go:89] found id: ""
	I1217 12:03:56.598835 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.598858 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:56.598878 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:56.598964 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:56.624629 3219848 cri.go:89] found id: ""
	I1217 12:03:56.624659 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.624668 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:56.624675 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:56.624736 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:56.650192 3219848 cri.go:89] found id: ""
	I1217 12:03:56.650214 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.650222 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:56.650229 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:56.650286 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:56.675523 3219848 cri.go:89] found id: ""
	I1217 12:03:56.675548 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.675557 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:56.675563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:56.675651 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:56.701703 3219848 cri.go:89] found id: ""
	I1217 12:03:56.701731 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.701740 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:56.701751 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:56.701762 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:56.717844 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:56.717877 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:56.837097 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:56.823600    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.824395    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827077    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827764    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.829468    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:56.837160 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:56.837195 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:56.864759 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:56.864792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:56.892589 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:56.892615 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
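Each retry cycle above has the same shape: probe for a running kube-apiserver process, ask the CRI runtime for every expected control-plane container by name, and, when nothing is found, fall back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal Go sketch of the per-component probe, illustrative only (the component list mirrors the log, but this is not minikube's actual cri.go code, and it assumes sudo and crictl are available on the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The container names probed in each cycle of the log above.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Equivalent of: sudo crictl ps -a --quiet --name=<name>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("probe for %q failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			// Corresponds to the `found id: ""` / `No container was found matching ...` pairs.
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("found %d container(s) for %q: %v\n", len(ids), name, ids)
    	}
    }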
	I1217 12:03:59.450097 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:59.460573 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:59.460649 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:59.484966 3219848 cri.go:89] found id: ""
	I1217 12:03:59.484992 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.485001 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:59.485007 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:59.485073 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:59.509519 3219848 cri.go:89] found id: ""
	I1217 12:03:59.509545 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.509554 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:59.509561 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:59.509619 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:59.535238 3219848 cri.go:89] found id: ""
	I1217 12:03:59.535307 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.535331 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:59.535351 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:59.535443 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:59.561799 3219848 cri.go:89] found id: ""
	I1217 12:03:59.561823 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.561832 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:59.561839 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:59.561898 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:59.587394 3219848 cri.go:89] found id: ""
	I1217 12:03:59.587416 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.587425 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:59.587431 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:59.587489 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:59.614672 3219848 cri.go:89] found id: ""
	I1217 12:03:59.614695 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.614704 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:59.614712 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:59.614774 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:59.641144 3219848 cri.go:89] found id: ""
	I1217 12:03:59.641171 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.641180 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:59.641187 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:59.641251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:59.667139 3219848 cri.go:89] found id: ""
	I1217 12:03:59.667167 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.667176 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:59.667184 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:59.667196 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:59.725056 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:59.725091 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:59.741510 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:59.741593 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:59.858554 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:59.849895    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.850546    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852238    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852841    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.854544    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:59.858578 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:59.858592 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:59.884457 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:59.884492 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:02.413040 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:02.426774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:02.426848 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:02.456484 3219848 cri.go:89] found id: ""
	I1217 12:04:02.456587 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.456601 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:02.456609 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:02.456706 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:02.485434 3219848 cri.go:89] found id: ""
	I1217 12:04:02.485506 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.485531 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:02.485547 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:02.485622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:02.512063 3219848 cri.go:89] found id: ""
	I1217 12:04:02.512100 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.512109 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:02.512116 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:02.512195 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:02.538362 3219848 cri.go:89] found id: ""
	I1217 12:04:02.538433 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.538454 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:02.538462 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:02.538525 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:02.567959 3219848 cri.go:89] found id: ""
	I1217 12:04:02.567994 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.568003 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:02.568009 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:02.568077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:02.594823 3219848 cri.go:89] found id: ""
	I1217 12:04:02.594860 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.594869 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:02.594876 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:02.594950 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:02.625125 3219848 cri.go:89] found id: ""
	I1217 12:04:02.625196 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.625211 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:02.625219 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:02.625282 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:02.650998 3219848 cri.go:89] found id: ""
	I1217 12:04:02.651033 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.651042 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:02.651051 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:02.651062 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:02.676950 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:02.676984 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:02.711118 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:02.711144 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:02.774152 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:02.774233 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:02.794787 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:02.794862 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:02.886703 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:02.878272    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.878713    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880270    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880830    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.882492    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:05.386993 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:05.398225 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:05.398299 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:05.426294 3219848 cri.go:89] found id: ""
	I1217 12:04:05.426321 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.426330 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:05.426337 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:05.426399 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:05.451004 3219848 cri.go:89] found id: ""
	I1217 12:04:05.451027 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.451036 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:05.451049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:05.451112 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:05.476504 3219848 cri.go:89] found id: ""
	I1217 12:04:05.476532 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.476542 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:05.476549 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:05.476607 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:05.506001 3219848 cri.go:89] found id: ""
	I1217 12:04:05.506028 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.506036 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:05.506043 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:05.506103 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:05.531776 3219848 cri.go:89] found id: ""
	I1217 12:04:05.531803 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.531813 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:05.531820 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:05.531878 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:05.558040 3219848 cri.go:89] found id: ""
	I1217 12:04:05.558068 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.558078 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:05.558085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:05.558149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:05.582988 3219848 cri.go:89] found id: ""
	I1217 12:04:05.583024 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.583033 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:05.583040 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:05.583115 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:05.609687 3219848 cri.go:89] found id: ""
	I1217 12:04:05.609725 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.609734 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:05.609744 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:05.609756 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:05.677594 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:05.668798    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.669411    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671028    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671605    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.673145    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:05.677661 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:05.677689 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:05.704024 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:05.704062 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:05.736880 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:05.736906 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:05.810417 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:05.810457 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:08.343493 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:08.353931 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:08.354001 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:08.377982 3219848 cri.go:89] found id: ""
	I1217 12:04:08.378050 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.378062 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:08.378069 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:08.378160 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:08.402837 3219848 cri.go:89] found id: ""
	I1217 12:04:08.402870 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.402880 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:08.402886 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:08.402956 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:08.430641 3219848 cri.go:89] found id: ""
	I1217 12:04:08.430666 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.430675 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:08.430682 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:08.430747 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:08.455904 3219848 cri.go:89] found id: ""
	I1217 12:04:08.455937 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.455947 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:08.455954 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:08.456020 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:08.480357 3219848 cri.go:89] found id: ""
	I1217 12:04:08.480388 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.480398 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:08.480405 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:08.480506 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:08.505595 3219848 cri.go:89] found id: ""
	I1217 12:04:08.505629 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.505682 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:08.505701 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:08.505765 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:08.531028 3219848 cri.go:89] found id: ""
	I1217 12:04:08.531065 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.531074 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:08.531081 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:08.531156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:08.559015 3219848 cri.go:89] found id: ""
	I1217 12:04:08.559051 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.559060 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:08.559069 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:08.559081 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:08.574853 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:08.574883 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:08.640119 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:08.631556    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.632320    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634049    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634630    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.635699    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:08.640141 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:08.640154 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:08.666054 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:08.666091 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:08.694523 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:08.694553 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:11.260393 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:11.271847 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:11.271939 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:11.297537 3219848 cri.go:89] found id: ""
	I1217 12:04:11.297559 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.297568 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:11.297574 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:11.297669 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:11.326252 3219848 cri.go:89] found id: ""
	I1217 12:04:11.326279 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.326288 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:11.326295 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:11.326354 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:11.354965 3219848 cri.go:89] found id: ""
	I1217 12:04:11.354991 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.355013 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:11.355020 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:11.355085 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:11.379623 3219848 cri.go:89] found id: ""
	I1217 12:04:11.379649 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.379657 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:11.379664 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:11.379730 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:11.405089 3219848 cri.go:89] found id: ""
	I1217 12:04:11.405157 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.405185 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:11.405200 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:11.405276 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:11.431039 3219848 cri.go:89] found id: ""
	I1217 12:04:11.431064 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.431073 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:11.431079 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:11.431138 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:11.456294 3219848 cri.go:89] found id: ""
	I1217 12:04:11.456329 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.456338 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:11.456345 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:11.456437 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:11.485568 3219848 cri.go:89] found id: ""
	I1217 12:04:11.485595 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.485604 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:11.485613 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:11.485628 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:11.542231 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:11.542268 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:11.559119 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:11.559201 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:11.628507 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:11.619906    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.620667    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622406    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622904    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.624511    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:11.628580 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:11.628617 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:11.654658 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:11.654692 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:14.187317 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:14.200950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:14.201028 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:14.225871 3219848 cri.go:89] found id: ""
	I1217 12:04:14.225907 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.225917 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:14.225924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:14.225982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:14.255169 3219848 cri.go:89] found id: ""
	I1217 12:04:14.255194 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.255203 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:14.255210 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:14.255270 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:14.279884 3219848 cri.go:89] found id: ""
	I1217 12:04:14.279914 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.279928 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:14.279935 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:14.279993 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:14.303876 3219848 cri.go:89] found id: ""
	I1217 12:04:14.303902 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.303911 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:14.303918 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:14.303982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:14.329876 3219848 cri.go:89] found id: ""
	I1217 12:04:14.329902 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.329911 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:14.329924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:14.329993 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:14.355681 3219848 cri.go:89] found id: ""
	I1217 12:04:14.355707 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.355723 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:14.355730 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:14.355791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:14.380557 3219848 cri.go:89] found id: ""
	I1217 12:04:14.380582 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.380591 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:14.380607 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:14.380669 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:14.406559 3219848 cri.go:89] found id: ""
	I1217 12:04:14.406626 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.406652 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:14.406671 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:14.406684 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:14.435535 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:14.435567 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:14.496057 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:14.496100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:14.512036 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:14.512068 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:14.581215 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:14.571459    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.572243    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574240    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574925    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.576493    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:14.581280 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:14.581299 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
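The cycles repeat at roughly three-second intervals, which is the classic poll-until-deadline pattern. A hedged sketch of that control flow (the interval and timeout values here are assumptions for illustration, not values taken from minikube):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pollUntil retries check every interval until it succeeds or the deadline
    // passes, the same shape as the repeated probe cycles in the log above.
    func pollUntil(interval, timeout time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := check(); err == nil {
    			return nil
    		} else if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting: %w", err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := pollUntil(3*time.Second, 12*time.Second, func() error {
    		return errors.New("kube-apiserver not running") // stand-in for the real probe
    	})
    	fmt.Println(err)
    }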
	I1217 12:04:17.108603 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:17.119638 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:17.119710 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:17.144879 3219848 cri.go:89] found id: ""
	I1217 12:04:17.144901 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.144909 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:17.144915 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:17.144976 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:17.169341 3219848 cri.go:89] found id: ""
	I1217 12:04:17.169366 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.169375 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:17.169381 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:17.169440 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:17.193770 3219848 cri.go:89] found id: ""
	I1217 12:04:17.193792 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.193800 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:17.193806 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:17.193867 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:17.218766 3219848 cri.go:89] found id: ""
	I1217 12:04:17.218788 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.218797 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:17.218804 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:17.218911 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:17.246745 3219848 cri.go:89] found id: ""
	I1217 12:04:17.246768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.246777 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:17.246783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:17.246844 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:17.271877 3219848 cri.go:89] found id: ""
	I1217 12:04:17.271898 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.271907 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:17.271914 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:17.271971 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:17.296098 3219848 cri.go:89] found id: ""
	I1217 12:04:17.296124 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.296133 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:17.296140 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:17.296202 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:17.321740 3219848 cri.go:89] found id: ""
	I1217 12:04:17.321767 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.321777 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:17.321788 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:17.321799 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:17.378911 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:17.378944 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:17.395425 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:17.395454 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:17.458148 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:17.450570    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.450926    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.452495    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.452908    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.454301    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:17.458172 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:17.458185 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:17.483130 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:17.483199 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:20.011622 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:20.036129 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:20.036210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:20.069785 3219848 cri.go:89] found id: ""
	I1217 12:04:20.069812 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.069820 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:20.069826 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:20.069891 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:20.118138 3219848 cri.go:89] found id: ""
	I1217 12:04:20.118165 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.118174 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:20.118180 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:20.118287 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:20.145219 3219848 cri.go:89] found id: ""
	I1217 12:04:20.145246 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.145267 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:20.145274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:20.145340 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:20.171515 3219848 cri.go:89] found id: ""
	I1217 12:04:20.171541 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.171549 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:20.171556 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:20.171615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:20.198371 3219848 cri.go:89] found id: ""
	I1217 12:04:20.198393 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.198409 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:20.198416 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:20.198476 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:20.226505 3219848 cri.go:89] found id: ""
	I1217 12:04:20.226529 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.226538 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:20.226544 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:20.226604 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:20.251848 3219848 cri.go:89] found id: ""
	I1217 12:04:20.251874 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.251883 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:20.251890 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:20.251951 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:20.281838 3219848 cri.go:89] found id: ""
	I1217 12:04:20.281863 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.281872 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:20.281887 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:20.281899 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:20.344875 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:20.336196    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.336887    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.338603    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.339150    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.340924    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:20.336196    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.336887    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.338603    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.339150    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.340924    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:20.344897 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:20.344909 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:20.370205 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:20.370244 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:20.403171 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:20.403203 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:20.459306 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:20.459342 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
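	[annotation] Each cycle above sweeps the expected control-plane and addon containers one name at a time; "crictl ps -a --quiet --name=<name>" prints matching container IDs one per line, so an empty result is exactly what produces the paired found id: "" / "0 containers" lines. A hedged sketch of that sweep (assumption: a plain local loop, not minikube's real cri.go, which also scopes the query to the containerd runtime root shown in the log):

	    package main

	    import (
	        "log"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // One crictl query per expected component, mirroring the sweep above.
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	        }
	        for _, name := range components {
	            // --quiet prints one container ID per line; empty output means no match.
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            if err != nil {
	                log.Printf("crictl failed for %q: %v", name, err)
	                continue
	            }
	            if ids := strings.Fields(string(out)); len(ids) == 0 {
	                log.Printf("No container was found matching %q", name)
	            }
	        }
	    }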
	I1217 12:04:22.976954 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:22.987706 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:22.987785 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:23.048240 3219848 cri.go:89] found id: ""
	I1217 12:04:23.048267 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.048276 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:23.048282 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:23.048342 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:23.098972 3219848 cri.go:89] found id: ""
	I1217 12:04:23.099001 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.099041 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:23.099055 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:23.099142 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:23.130170 3219848 cri.go:89] found id: ""
	I1217 12:04:23.130192 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.130201 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:23.130207 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:23.130266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:23.157897 3219848 cri.go:89] found id: ""
	I1217 12:04:23.157919 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.157927 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:23.157933 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:23.157990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:23.186732 3219848 cri.go:89] found id: ""
	I1217 12:04:23.186757 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.186766 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:23.186772 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:23.186834 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:23.211252 3219848 cri.go:89] found id: ""
	I1217 12:04:23.211278 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.211287 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:23.211294 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:23.211360 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:23.235484 3219848 cri.go:89] found id: ""
	I1217 12:04:23.235507 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.235516 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:23.235523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:23.235593 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:23.263167 3219848 cri.go:89] found id: ""
	I1217 12:04:23.263195 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.263204 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:23.263213 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:23.263224 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:23.319468 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:23.319503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:23.335277 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:23.335309 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:23.401412 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:23.393032    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.393444    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395045    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395905    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.397587    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:23.393032    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.393444    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395045    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395905    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.397587    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:23.401435 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:23.401447 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:23.427002 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:23.427042 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
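	[annotation] Every "describe nodes" attempt above fails the same way: kubectl dials localhost:8443, nothing is listening there (no kube-apiserver container exists, per the sweeps), and the TCP connect is refused before TLS or authentication ever come into play. A quick hedged reachability check for that port (assumption: run inside the node; 8443 is the control-plane port this profile uses):

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // A refused connect here reproduces the kubectl errors above without
	        // involving kubeconfig at all: the listener simply is not there.
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            fmt.Println("apiserver port unreachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("something is listening on localhost:8443")
	    }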
	I1217 12:04:25.955964 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:25.966813 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:25.966907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:25.991674 3219848 cri.go:89] found id: ""
	I1217 12:04:25.991698 3219848 logs.go:282] 0 containers: []
	W1217 12:04:25.991707 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:25.991714 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:25.991828 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:26.043851 3219848 cri.go:89] found id: ""
	I1217 12:04:26.043878 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.043888 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:26.043895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:26.043963 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:26.099675 3219848 cri.go:89] found id: ""
	I1217 12:04:26.099700 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.099708 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:26.099714 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:26.099786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:26.129744 3219848 cri.go:89] found id: ""
	I1217 12:04:26.129768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.129776 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:26.129783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:26.129849 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:26.155393 3219848 cri.go:89] found id: ""
	I1217 12:04:26.155420 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.155428 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:26.155434 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:26.155492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:26.182178 3219848 cri.go:89] found id: ""
	I1217 12:04:26.182200 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.182209 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:26.182216 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:26.182277 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:26.206976 3219848 cri.go:89] found id: ""
	I1217 12:04:26.207000 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.207009 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:26.207015 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:26.207072 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:26.231357 3219848 cri.go:89] found id: ""
	I1217 12:04:26.231383 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.231391 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:26.231400 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:26.231411 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:26.287609 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:26.287646 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:26.303654 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:26.303701 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:26.372084 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:26.363097    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.363759    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.365390    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.366039    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.367715    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:26.363097    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.363759    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.365390    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.366039    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.367715    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:26.372107 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:26.372122 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:26.398349 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:26.398386 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:28.926935 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:28.938567 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:28.938637 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:28.965018 3219848 cri.go:89] found id: ""
	I1217 12:04:28.965042 3219848 logs.go:282] 0 containers: []
	W1217 12:04:28.965050 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:28.965056 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:28.965116 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:28.993619 3219848 cri.go:89] found id: ""
	I1217 12:04:28.993646 3219848 logs.go:282] 0 containers: []
	W1217 12:04:28.993654 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:28.993661 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:28.993723 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:29.042253 3219848 cri.go:89] found id: ""
	I1217 12:04:29.042274 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.042282 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:29.042289 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:29.042347 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:29.109464 3219848 cri.go:89] found id: ""
	I1217 12:04:29.109486 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.109495 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:29.109501 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:29.109563 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:29.139820 3219848 cri.go:89] found id: ""
	I1217 12:04:29.139842 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.139850 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:29.139857 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:29.139917 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:29.165440 3219848 cri.go:89] found id: ""
	I1217 12:04:29.165465 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.165474 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:29.165481 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:29.165543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:29.191572 3219848 cri.go:89] found id: ""
	I1217 12:04:29.191597 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.191606 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:29.191613 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:29.191673 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:29.217986 3219848 cri.go:89] found id: ""
	I1217 12:04:29.218011 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.218020 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:29.218030 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:29.218041 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:29.274933 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:29.274967 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:29.290733 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:29.290760 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:29.358661 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:29.349933    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.350736    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352310    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352861    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.354377    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:29.349933    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.350736    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352310    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352861    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.354377    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:29.358683 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:29.358697 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:29.385070 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:29.385107 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
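	[annotation] The timestamps show the whole gather cycle repeating roughly every three seconds: a pgrep liveness check for the kube-apiserver process, the container sweep, then the five log sources. A hedged sketch of that retry rhythm (assumption: a plain sleep-based loop and an illustrative deadline; the real cadence and timeout live in minikube's wait logic, not shown in this log):

	    package main

	    import (
	        "log"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(6 * time.Minute)
	        for time.Now().Before(deadline) {
	            // Same liveness check each cycle above starts with; pgrep exits
	            // nonzero when no matching process exists.
	            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	                log.Println("kube-apiserver process found")
	                return
	            }
	            // On failure: sweep containers and gather kubelet/dmesg/describe
	            // nodes/containerd/container-status logs, then retry (as above).
	            time.Sleep(3 * time.Second)
	        }
	        log.Println("timed out waiting for kube-apiserver")
	    }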
	I1217 12:04:31.914639 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:31.928018 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:31.928092 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:31.955140 3219848 cri.go:89] found id: ""
	I1217 12:04:31.955163 3219848 logs.go:282] 0 containers: []
	W1217 12:04:31.955171 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:31.955178 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:31.955252 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:31.982332 3219848 cri.go:89] found id: ""
	I1217 12:04:31.982364 3219848 logs.go:282] 0 containers: []
	W1217 12:04:31.982380 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:31.982387 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:31.982448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:32.045708 3219848 cri.go:89] found id: ""
	I1217 12:04:32.045731 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.045740 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:32.045746 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:32.045805 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:32.093198 3219848 cri.go:89] found id: ""
	I1217 12:04:32.093220 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.093229 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:32.093242 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:32.093301 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:32.120574 3219848 cri.go:89] found id: ""
	I1217 12:04:32.120641 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.120664 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:32.120684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:32.120772 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:32.151069 3219848 cri.go:89] found id: ""
	I1217 12:04:32.151137 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.151160 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:32.151182 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:32.151272 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:32.181226 3219848 cri.go:89] found id: ""
	I1217 12:04:32.181303 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.181326 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:32.181347 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:32.181439 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:32.207237 3219848 cri.go:89] found id: ""
	I1217 12:04:32.207295 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.207310 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:32.207324 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:32.207336 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:32.263771 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:32.263808 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:32.279666 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:32.279693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:32.345645 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:32.336846    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338076    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338956    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340462    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340816    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:32.336846    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338076    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338956    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340462    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340816    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:32.345666 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:32.345679 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:32.371311 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:32.371347 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:34.899829 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:34.911276 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:34.911354 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:34.936056 3219848 cri.go:89] found id: ""
	I1217 12:04:34.936080 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.936089 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:34.936096 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:34.936156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:34.962166 3219848 cri.go:89] found id: ""
	I1217 12:04:34.962192 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.962201 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:34.962207 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:34.962271 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:34.987891 3219848 cri.go:89] found id: ""
	I1217 12:04:34.987916 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.987926 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:34.987934 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:34.987994 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:35.036291 3219848 cri.go:89] found id: ""
	I1217 12:04:35.036319 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.036331 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:35.036339 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:35.036402 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:35.091997 3219848 cri.go:89] found id: ""
	I1217 12:04:35.092023 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.092041 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:35.092049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:35.092119 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:35.126699 3219848 cri.go:89] found id: ""
	I1217 12:04:35.126721 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.126736 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:35.126743 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:35.126802 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:35.152052 3219848 cri.go:89] found id: ""
	I1217 12:04:35.152077 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.152087 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:35.152094 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:35.152156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:35.177868 3219848 cri.go:89] found id: ""
	I1217 12:04:35.177897 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.177906 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:35.177916 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:35.177955 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:35.213172 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:35.213200 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:35.269771 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:35.269807 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:35.285802 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:35.285841 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:35.355953 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:35.345556    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.346168    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348018    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348336    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.351480    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:35.345556    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.346168    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348018    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348336    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.351480    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:35.355976 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:35.355988 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:37.883397 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:37.894032 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:37.894101 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:37.927040 3219848 cri.go:89] found id: ""
	I1217 12:04:37.927066 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.927075 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:37.927085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:37.927150 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:37.951890 3219848 cri.go:89] found id: ""
	I1217 12:04:37.951916 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.951925 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:37.951931 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:37.951995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:37.978258 3219848 cri.go:89] found id: ""
	I1217 12:04:37.978286 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.978295 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:37.978302 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:37.978383 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:38.032665 3219848 cri.go:89] found id: ""
	I1217 12:04:38.032689 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.032698 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:38.032705 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:38.032770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:38.068588 3219848 cri.go:89] found id: ""
	I1217 12:04:38.068617 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.068626 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:38.068633 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:38.068703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:38.111074 3219848 cri.go:89] found id: ""
	I1217 12:04:38.111102 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.111112 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:38.111119 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:38.111183 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:38.139962 3219848 cri.go:89] found id: ""
	I1217 12:04:38.139989 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.139998 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:38.140005 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:38.140071 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:38.165120 3219848 cri.go:89] found id: ""
	I1217 12:04:38.165147 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.165156 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:38.165165 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:38.165176 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:38.221183 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:38.221218 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:38.237532 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:38.237565 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:38.307341 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:38.299115    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.299933    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301496    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301856    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.303152    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:38.299115    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.299933    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301496    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301856    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.303152    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:38.307362 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:38.307376 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:38.333705 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:38.333739 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:40.864326 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:40.875421 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:40.875500 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:40.900554 3219848 cri.go:89] found id: ""
	I1217 12:04:40.900576 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.900586 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:40.900592 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:40.900654 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:40.926107 3219848 cri.go:89] found id: ""
	I1217 12:04:40.926134 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.926143 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:40.926151 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:40.926210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:40.951315 3219848 cri.go:89] found id: ""
	I1217 12:04:40.951341 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.951350 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:40.951356 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:40.951414 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:40.976682 3219848 cri.go:89] found id: ""
	I1217 12:04:40.976713 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.976723 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:40.976731 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:40.976790 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:41.016365 3219848 cri.go:89] found id: ""
	I1217 12:04:41.016388 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.016396 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:41.016403 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:41.016527 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:41.081810 3219848 cri.go:89] found id: ""
	I1217 12:04:41.081838 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.081848 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:41.081856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:41.081915 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:41.107919 3219848 cri.go:89] found id: ""
	I1217 12:04:41.107946 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.107955 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:41.107962 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:41.108032 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:41.134563 3219848 cri.go:89] found id: ""
	I1217 12:04:41.134589 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.134599 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:41.134608 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:41.134619 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:41.192325 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:41.192362 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:41.208694 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:41.208723 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:41.279184 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:41.267762    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.268617    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.272813    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.273616    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.275116    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:41.267762    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.268617    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.272813    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.273616    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.275116    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:41.279207 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:41.279221 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:41.305398 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:41.305436 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[The log-gathering loop above repeats seven more times between 12:04:43 and 12:05:02 (kubectl PIDs 8401, 8513, 8621, 8742, 8868, 8970, 9084), each iteration identical apart from timestamps: pgrep finds no kube-apiserver process, crictl reports 0 containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard, and every "describe nodes" attempt fails with "The connection to the server localhost:8443 was refused".]
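	The loop condenses to three checks that can be run by hand while the cluster is in this state; a minimal sketch in shell, assuming the minikube profile under test is still up (<profile> is a placeholder, not a name taken from this report):

	  # Same process check as the log's pgrep: is any kube-apiserver process alive on the node?
	  minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	  # Same CRI check as the crictl runs above: any kube-apiserver container, running or exited?
	  minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=kube-apiserver

	  # Same kubectl probe: keeps failing with "connection refused" until something answers on :8443
	  minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig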
	I1217 12:05:04.675110 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:04.686658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:04.686731 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:04.714143 3219848 cri.go:89] found id: ""
	I1217 12:05:04.714169 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.714178 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:04.714185 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:04.714246 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:04.741446 3219848 cri.go:89] found id: ""
	I1217 12:05:04.741472 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.741481 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:04.741488 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:04.741549 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:04.771197 3219848 cri.go:89] found id: ""
	I1217 12:05:04.771224 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.771234 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:04.771241 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:04.771305 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:04.801798 3219848 cri.go:89] found id: ""
	I1217 12:05:04.801824 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.801834 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:04.801840 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:04.801901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:04.827212 3219848 cri.go:89] found id: ""
	I1217 12:05:04.827240 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.827249 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:04.827257 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:04.827322 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:04.852794 3219848 cri.go:89] found id: ""
	I1217 12:05:04.852821 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.852831 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:04.852838 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:04.852898 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:04.879034 3219848 cri.go:89] found id: ""
	I1217 12:05:04.879058 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.879069 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:04.879075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:04.879134 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:04.904782 3219848 cri.go:89] found id: ""
	I1217 12:05:04.904806 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.904814 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:04.904823 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:04.904833 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:04.961550 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:04.961581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:04.977831 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:04.977861 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:05.101127 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:05.083862    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.093276    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.094908    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.095507    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.097102    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:05.101155 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:05.101168 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:05.128517 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:05.128550 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:07.660217 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:07.670837 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:07.670907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:07.696773 3219848 cri.go:89] found id: ""
	I1217 12:05:07.696800 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.696809 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:07.696816 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:07.696873 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:07.722665 3219848 cri.go:89] found id: ""
	I1217 12:05:07.722688 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.722697 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:07.722703 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:07.722770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:07.748882 3219848 cri.go:89] found id: ""
	I1217 12:05:07.748907 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.748916 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:07.748922 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:07.748983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:07.777951 3219848 cri.go:89] found id: ""
	I1217 12:05:07.777976 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.777985 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:07.777992 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:07.778052 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:07.807386 3219848 cri.go:89] found id: ""
	I1217 12:05:07.807414 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.807423 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:07.807430 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:07.807492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:07.836910 3219848 cri.go:89] found id: ""
	I1217 12:05:07.836938 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.836947 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:07.836954 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:07.837012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:07.861301 3219848 cri.go:89] found id: ""
	I1217 12:05:07.861327 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.861337 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:07.861343 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:07.861402 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:07.885389 3219848 cri.go:89] found id: ""
	I1217 12:05:07.885412 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.885422 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:07.885431 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:07.885444 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:07.940922 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:07.940954 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:07.956764 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:07.956792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:08.040092 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:08.023500    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.024045    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.030582    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.031321    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.035763    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:08.040167 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:08.040195 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:08.076595 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:08.076674 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:10.614548 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:10.625273 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:10.625344 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:10.649744 3219848 cri.go:89] found id: ""
	I1217 12:05:10.649774 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.649782 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:10.649789 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:10.649847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:10.673909 3219848 cri.go:89] found id: ""
	I1217 12:05:10.673936 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.673945 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:10.673952 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:10.674010 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:10.699817 3219848 cri.go:89] found id: ""
	I1217 12:05:10.699840 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.699849 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:10.699855 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:10.699914 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:10.724608 3219848 cri.go:89] found id: ""
	I1217 12:05:10.724630 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.724638 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:10.724645 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:10.724702 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:10.756858 3219848 cri.go:89] found id: ""
	I1217 12:05:10.756883 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.756892 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:10.756899 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:10.756959 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:10.787011 3219848 cri.go:89] found id: ""
	I1217 12:05:10.787037 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.787046 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:10.787052 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:10.787111 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:10.816658 3219848 cri.go:89] found id: ""
	I1217 12:05:10.816683 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.816691 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:10.816698 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:10.816757 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:10.841817 3219848 cri.go:89] found id: ""
	I1217 12:05:10.841882 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.841899 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:10.841909 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:10.841920 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:10.899952 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:10.899994 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:10.915585 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:10.915615 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:10.983597 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:10.975197    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.975853    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.977490    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.978147    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.979658    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:10.983619 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:10.983636 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:11.013827 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:11.013865 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:13.590017 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:13.601224 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:13.601300 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:13.630748 3219848 cri.go:89] found id: ""
	I1217 12:05:13.630771 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.630781 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:13.630788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:13.630845 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:13.659125 3219848 cri.go:89] found id: ""
	I1217 12:05:13.659150 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.659160 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:13.659166 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:13.659224 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:13.689040 3219848 cri.go:89] found id: ""
	I1217 12:05:13.689066 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.689075 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:13.689082 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:13.689149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:13.713917 3219848 cri.go:89] found id: ""
	I1217 12:05:13.713941 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.713949 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:13.713956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:13.714016 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:13.738663 3219848 cri.go:89] found id: ""
	I1217 12:05:13.738686 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.738695 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:13.738701 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:13.738759 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:13.762897 3219848 cri.go:89] found id: ""
	I1217 12:05:13.762922 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.762931 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:13.762938 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:13.762995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:13.791695 3219848 cri.go:89] found id: ""
	I1217 12:05:13.791720 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.791736 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:13.791743 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:13.791800 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:13.821207 3219848 cri.go:89] found id: ""
	I1217 12:05:13.821230 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.821239 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:13.821248 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:13.821259 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:13.848837 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:13.848867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:13.906239 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:13.906278 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:13.921882 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:13.921917 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:13.991574 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:13.982172    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.983086    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985111    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985659    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.986629    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:13.991596 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:13.991609 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:16.525032 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:16.535486 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:16.535556 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:16.561699 3219848 cri.go:89] found id: ""
	I1217 12:05:16.561721 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.561730 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:16.561736 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:16.561792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:16.586264 3219848 cri.go:89] found id: ""
	I1217 12:05:16.586287 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.586296 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:16.586303 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:16.586360 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:16.611385 3219848 cri.go:89] found id: ""
	I1217 12:05:16.611409 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.611418 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:16.611425 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:16.611485 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:16.636230 3219848 cri.go:89] found id: ""
	I1217 12:05:16.636256 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.636267 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:16.636274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:16.636332 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:16.660919 3219848 cri.go:89] found id: ""
	I1217 12:05:16.660942 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.660950 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:16.660956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:16.661013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:16.688962 3219848 cri.go:89] found id: ""
	I1217 12:05:16.688987 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.688996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:16.689003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:16.689070 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:16.719405 3219848 cri.go:89] found id: ""
	I1217 12:05:16.719428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.719437 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:16.719443 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:16.719502 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:16.745166 3219848 cri.go:89] found id: ""
	I1217 12:05:16.745192 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.745201 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:16.745211 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:16.745223 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:16.771975 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:16.772014 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:16.804149 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:16.804180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:16.861212 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:16.861249 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:16.877226 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:16.877257 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:16.943896 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:16.935292    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.935946    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.937663    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.938200    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.939861    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:19.444922 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:19.455525 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:19.455598 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:19.480970 3219848 cri.go:89] found id: ""
	I1217 12:05:19.480995 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.481006 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:19.481017 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:19.481079 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:19.506235 3219848 cri.go:89] found id: ""
	I1217 12:05:19.506258 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.506267 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:19.506274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:19.506333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:19.532063 3219848 cri.go:89] found id: ""
	I1217 12:05:19.532086 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.532095 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:19.532105 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:19.532165 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:19.562427 3219848 cri.go:89] found id: ""
	I1217 12:05:19.562450 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.562460 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:19.562466 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:19.562524 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:19.587869 3219848 cri.go:89] found id: ""
	I1217 12:05:19.587903 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.587912 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:19.587919 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:19.587990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:19.612889 3219848 cri.go:89] found id: ""
	I1217 12:05:19.612916 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.612925 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:19.612932 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:19.612990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:19.637949 3219848 cri.go:89] found id: ""
	I1217 12:05:19.637972 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.637980 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:19.637992 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:19.638053 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:19.666633 3219848 cri.go:89] found id: ""
	I1217 12:05:19.666703 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.666740 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:19.666769 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:19.666798 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:19.726394 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:19.726430 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:19.742581 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:19.742662 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:19.807145 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:19.798144    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.799463    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.800143    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.801047    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.802652    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:19.807174 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:19.807187 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:19.832758 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:19.832792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:22.366107 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:22.376592 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:22.376666 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:22.401822 3219848 cri.go:89] found id: ""
	I1217 12:05:22.401847 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.401857 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:22.401863 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:22.401921 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:22.425903 3219848 cri.go:89] found id: ""
	I1217 12:05:22.425927 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.425936 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:22.425943 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:22.426008 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:22.454459 3219848 cri.go:89] found id: ""
	I1217 12:05:22.454484 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.454493 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:22.454499 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:22.454559 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:22.479178 3219848 cri.go:89] found id: ""
	I1217 12:05:22.479202 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.479212 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:22.479219 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:22.479276 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:22.505859 3219848 cri.go:89] found id: ""
	I1217 12:05:22.505885 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.505900 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:22.505908 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:22.505995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:22.531485 3219848 cri.go:89] found id: ""
	I1217 12:05:22.531506 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.531515 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:22.531523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:22.531583 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:22.558267 3219848 cri.go:89] found id: ""
	I1217 12:05:22.558343 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.558360 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:22.558367 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:22.558427 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:22.588380 3219848 cri.go:89] found id: ""
	I1217 12:05:22.588431 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.588442 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:22.588451 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:22.588463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:22.647590 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:22.647629 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:22.665568 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:22.665597 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:22.738273 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:22.729900    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.730477    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732137    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732564    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.734423    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:22.738298 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:22.738310 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:22.764468 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:22.764503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:25.296756 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:25.320288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:25.320356 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:25.347938 3219848 cri.go:89] found id: ""
	I1217 12:05:25.347959 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.347967 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:25.347973 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:25.348030 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:25.375407 3219848 cri.go:89] found id: ""
	I1217 12:05:25.375428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.375438 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:25.375444 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:25.375501 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:25.400165 3219848 cri.go:89] found id: ""
	I1217 12:05:25.400187 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.400195 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:25.400202 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:25.400266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:25.428203 3219848 cri.go:89] found id: ""
	I1217 12:05:25.428229 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.428240 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:25.428247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:25.428307 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:25.454651 3219848 cri.go:89] found id: ""
	I1217 12:05:25.454675 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.454685 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:25.454692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:25.454754 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:25.478961 3219848 cri.go:89] found id: ""
	I1217 12:05:25.478987 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.478996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:25.479003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:25.479088 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:25.508637 3219848 cri.go:89] found id: ""
	I1217 12:05:25.508661 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.508670 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:25.508676 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:25.508782 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:25.534245 3219848 cri.go:89] found id: ""
	I1217 12:05:25.534270 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.534279 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:25.534289 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:25.534306 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:25.569632 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:25.569662 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:25.625748 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:25.625783 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:25.641383 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:25.641409 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:25.709135 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:25.700728    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.701299    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703107    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703541    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.705177    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:25.700728    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.701299    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703107    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703541    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.705177    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:25.709156 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:25.709168 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
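The block above is one full iteration of minikube's health-check loop: pgrep for a kube-apiserver process, then one crictl query per expected control-plane container, then a log-gathering pass once nothing is found. As a rough illustration of that polling pattern (a sketch only — the helper names are invented and this is not minikube's actual ssh_runner/cri code):

// poll.go — hedged sketch of the retry loop visible in the log above.
// expectedContainers and listContainers are illustrative names, not minikube's API.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

var expectedContainers = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainers mirrors the `sudo crictl ps -a --quiet --name=<name>`
// calls in the log; it returns any matching container IDs.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for {
		found := 0
		for _, name := range expectedContainers {
			ids, err := listContainers(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			found += len(ids)
		}
		if found > 0 {
			return // control plane is coming up; stop polling
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence of the timestamps above
	}
}

Each empty crictl result corresponds to one of the `No container was found matching` warnings in the log, and the loop repeats on the roughly three-second cadence the timestamps show.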
	I1217 12:05:28.233802 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:28.244795 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:28.244872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:28.291390 3219848 cri.go:89] found id: ""
	I1217 12:05:28.291412 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.291421 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:28.291427 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:28.291488 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:28.355887 3219848 cri.go:89] found id: ""
	I1217 12:05:28.355909 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.355917 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:28.355924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:28.355983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:28.381610 3219848 cri.go:89] found id: ""
	I1217 12:05:28.381633 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.381641 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:28.381647 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:28.381707 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:28.407516 3219848 cri.go:89] found id: ""
	I1217 12:05:28.407544 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.407553 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:28.407560 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:28.407622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:28.436914 3219848 cri.go:89] found id: ""
	I1217 12:05:28.436982 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.437006 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:28.437021 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:28.437098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:28.461189 3219848 cri.go:89] found id: ""
	I1217 12:05:28.461258 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.461283 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:28.461298 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:28.461373 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:28.490913 3219848 cri.go:89] found id: ""
	I1217 12:05:28.490948 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.490958 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:28.490965 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:28.491033 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:28.521566 3219848 cri.go:89] found id: ""
	I1217 12:05:28.521589 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.521599 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:28.521610 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:28.521622 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:28.577123 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:28.577159 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:28.593088 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:28.593119 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:28.655447 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:28.646846   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.647657   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649169   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649707   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.651192   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:28.646846   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.647657   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649169   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649707   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.651192   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:28.655472 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:28.655484 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:28.680532 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:28.680566 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:31.213979 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:31.224716 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:31.224784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:31.257050 3219848 cri.go:89] found id: ""
	I1217 12:05:31.257071 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.257079 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:31.257085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:31.257141 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:31.315656 3219848 cri.go:89] found id: ""
	I1217 12:05:31.315677 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.315686 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:31.315692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:31.315746 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:31.349340 3219848 cri.go:89] found id: ""
	I1217 12:05:31.349360 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.349369 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:31.349375 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:31.349432 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:31.374728 3219848 cri.go:89] found id: ""
	I1217 12:05:31.374755 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.374764 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:31.374771 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:31.374833 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:31.401386 3219848 cri.go:89] found id: ""
	I1217 12:05:31.401422 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.401432 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:31.401439 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:31.401511 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:31.427234 3219848 cri.go:89] found id: ""
	I1217 12:05:31.427260 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.427270 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:31.427277 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:31.427338 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:31.452628 3219848 cri.go:89] found id: ""
	I1217 12:05:31.452666 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.452676 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:31.452684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:31.452756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:31.476684 3219848 cri.go:89] found id: ""
	I1217 12:05:31.476717 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.476725 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:31.476735 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:31.476745 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:31.533895 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:31.533928 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:31.549405 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:31.549433 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:31.617988 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:31.609983   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.610706   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612313   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612915   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.613854   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:31.609983   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.610706   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612313   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612915   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.613854   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:31.618022 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:31.618051 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:31.643544 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:31.643575 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
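Every `describe nodes` attempt in these cycles fails the same way: kubectl cannot reach https://localhost:8443, i.e. nothing is listening on the apiserver port. The failure mode can be reproduced independently of kubectl with a plain TCP dial (a sketch, assuming the same host and port as in the log):

// probe.go — sketch: reproduce the "connection refused" that the
// kubectl calls above hit, by dialing the apiserver port directly.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the log: dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}

A refused connection (rather than a timeout) indicates the port is closed on a reachable host, consistent with the kube-apiserver container never having started.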
	I1217 12:05:34.173214 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:34.183798 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:34.183881 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:34.208274 3219848 cri.go:89] found id: ""
	I1217 12:05:34.208299 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.208309 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:34.208315 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:34.208377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:34.232844 3219848 cri.go:89] found id: ""
	I1217 12:05:34.232870 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.232879 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:34.232886 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:34.232947 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:34.298630 3219848 cri.go:89] found id: ""
	I1217 12:05:34.298656 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.298665 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:34.298672 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:34.298732 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:34.352614 3219848 cri.go:89] found id: ""
	I1217 12:05:34.352657 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.352672 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:34.352679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:34.352745 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:34.378134 3219848 cri.go:89] found id: ""
	I1217 12:05:34.378156 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.378165 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:34.378171 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:34.378234 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:34.402637 3219848 cri.go:89] found id: ""
	I1217 12:05:34.402660 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.402668 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:34.402675 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:34.402758 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:34.428834 3219848 cri.go:89] found id: ""
	I1217 12:05:34.428906 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.428941 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:34.428948 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:34.429006 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:34.459618 3219848 cri.go:89] found id: ""
	I1217 12:05:34.459641 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.459654 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:34.459663 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:34.459674 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:34.514834 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:34.514867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:34.531691 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:34.531717 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:34.603404 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:34.594905   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.595653   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597085   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597664   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.599260   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:34.594905   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.595653   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597085   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597664   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.599260   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:34.603478 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:34.603498 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:34.629092 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:34.629131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:37.158533 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:37.170305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:37.170377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:37.195895 3219848 cri.go:89] found id: ""
	I1217 12:05:37.195920 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.195929 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:37.195936 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:37.195994 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:37.221126 3219848 cri.go:89] found id: ""
	I1217 12:05:37.221153 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.221162 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:37.221170 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:37.221228 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:37.246560 3219848 cri.go:89] found id: ""
	I1217 12:05:37.246584 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.246593 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:37.246600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:37.246663 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:37.289595 3219848 cri.go:89] found id: ""
	I1217 12:05:37.289620 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.289629 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:37.289635 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:37.289707 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:37.324903 3219848 cri.go:89] found id: ""
	I1217 12:05:37.324923 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.324932 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:37.324939 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:37.324997 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:37.361173 3219848 cri.go:89] found id: ""
	I1217 12:05:37.361194 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.361204 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:37.361210 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:37.361269 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:37.389438 3219848 cri.go:89] found id: ""
	I1217 12:05:37.389461 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.389470 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:37.389476 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:37.389537 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:37.414662 3219848 cri.go:89] found id: ""
	I1217 12:05:37.414700 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.414710 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:37.414719 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:37.414731 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:37.478614 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:37.471157   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.471613   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473091   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473470   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.474881   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:37.471157   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.471613   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473091   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473470   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.474881   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:37.478647 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:37.478661 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:37.504204 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:37.504241 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:37.535207 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:37.535282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:37.594334 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:37.594382 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:40.110392 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:40.122282 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:40.122363 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:40.148146 3219848 cri.go:89] found id: ""
	I1217 12:05:40.148171 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.148180 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:40.148186 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:40.148248 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:40.175122 3219848 cri.go:89] found id: ""
	I1217 12:05:40.175149 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.175158 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:40.175164 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:40.175224 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:40.201606 3219848 cri.go:89] found id: ""
	I1217 12:05:40.201629 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.201638 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:40.201644 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:40.201702 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:40.227663 3219848 cri.go:89] found id: ""
	I1217 12:05:40.227688 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.227697 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:40.227704 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:40.227760 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:40.279855 3219848 cri.go:89] found id: ""
	I1217 12:05:40.279881 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.279889 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:40.279896 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:40.279955 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:40.341349 3219848 cri.go:89] found id: ""
	I1217 12:05:40.341372 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.341381 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:40.341388 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:40.341445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:40.366250 3219848 cri.go:89] found id: ""
	I1217 12:05:40.366276 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.366285 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:40.366292 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:40.366374 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:40.390064 3219848 cri.go:89] found id: ""
	I1217 12:05:40.390091 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.390100 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:40.390112 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:40.390143 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:40.417840 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:40.417866 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:40.474223 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:40.474260 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:40.489995 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:40.490025 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:40.558792 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:40.550389   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.551049   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.552779   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.553286   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.554780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:40.550389   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.551049   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.552779   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.553286   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.554780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:40.558816 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:40.558829 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
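When no containers are found, the loop falls back to gathering host-level logs; the shell commands it runs appear verbatim above. A minimal sketch that fans out the same three journal/dmesg commands and collects their output (the describe-nodes step is omitted, since it needs the minikube-staged kubectl binary):

// gather.go — sketch of the "Gathering logs for ..." fan-out above.
// Command strings are copied from the log; running them requires root.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}

CombinedOutput is used so stderr from a failing command is captured alongside stdout, mirroring how the report records both streams for failed commands.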
	I1217 12:05:43.085654 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:43.096719 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:43.096788 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:43.122755 3219848 cri.go:89] found id: ""
	I1217 12:05:43.122822 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.122846 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:43.122862 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:43.122942 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:43.149072 3219848 cri.go:89] found id: ""
	I1217 12:05:43.149097 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.149106 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:43.149113 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:43.149192 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:43.175863 3219848 cri.go:89] found id: ""
	I1217 12:05:43.175889 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.175897 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:43.175929 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:43.176015 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:43.202533 3219848 cri.go:89] found id: ""
	I1217 12:05:43.202572 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.202580 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:43.202587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:43.202649 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:43.227233 3219848 cri.go:89] found id: ""
	I1217 12:05:43.227307 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.227331 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:43.227352 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:43.227449 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:43.293609 3219848 cri.go:89] found id: ""
	I1217 12:05:43.293677 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.293701 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:43.293723 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:43.293807 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:43.345463 3219848 cri.go:89] found id: ""
	I1217 12:05:43.345537 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.345563 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:43.345584 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:43.345692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:43.376719 3219848 cri.go:89] found id: ""
	I1217 12:05:43.376754 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.376763 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:43.376772 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:43.376785 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:43.434376 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:43.434408 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:43.449996 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:43.450023 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:43.518159 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:43.509135   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.509732   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511431   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511909   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.513376   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:43.509135   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.509732   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511431   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511909   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.513376   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:43.518179 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:43.518193 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:43.544448 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:43.544487 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:46.079862 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:46.091017 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:46.091085 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:46.116886 3219848 cri.go:89] found id: ""
	I1217 12:05:46.116913 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.116924 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:46.116939 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:46.117008 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:46.142188 3219848 cri.go:89] found id: ""
	I1217 12:05:46.142216 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.142227 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:46.142234 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:46.142296 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:46.168033 3219848 cri.go:89] found id: ""
	I1217 12:05:46.168059 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.168068 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:46.168075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:46.168141 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:46.194149 3219848 cri.go:89] found id: ""
	I1217 12:05:46.194178 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.194188 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:46.194197 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:46.194257 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:46.220319 3219848 cri.go:89] found id: ""
	I1217 12:05:46.220345 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.220354 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:46.220360 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:46.220456 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:46.246104 3219848 cri.go:89] found id: ""
	I1217 12:05:46.246131 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.246140 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:46.246147 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:46.246208 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:46.281496 3219848 cri.go:89] found id: ""
	I1217 12:05:46.281520 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.281528 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:46.281535 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:46.281597 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:46.327477 3219848 cri.go:89] found id: ""
	I1217 12:05:46.327558 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.327582 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:46.327625 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:46.327653 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:46.407413 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:46.407451 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:46.423419 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:46.423448 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:46.489920 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:46.481686   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.482194   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.483773   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.484191   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.485671   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:46.481686   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.482194   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.483773   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.484191   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.485671   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:46.489945 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:46.489959 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:46.516022 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:46.516061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:49.045130 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:49.056135 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:49.056216 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:49.080547 3219848 cri.go:89] found id: ""
	I1217 12:05:49.080568 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.080577 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:49.080583 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:49.080645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:49.106805 3219848 cri.go:89] found id: ""
	I1217 12:05:49.106834 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.106844 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:49.106850 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:49.106911 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:49.132478 3219848 cri.go:89] found id: ""
	I1217 12:05:49.132501 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.132509 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:49.132515 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:49.132579 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:49.159868 3219848 cri.go:89] found id: ""
	I1217 12:05:49.159896 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.159906 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:49.159912 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:49.159971 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:49.187789 3219848 cri.go:89] found id: ""
	I1217 12:05:49.187814 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.187835 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:49.187843 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:49.187902 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:49.213461 3219848 cri.go:89] found id: ""
	I1217 12:05:49.213489 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.213498 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:49.213505 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:49.213612 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:49.240191 3219848 cri.go:89] found id: ""
	I1217 12:05:49.240220 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.240229 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:49.240247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:49.240343 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:49.295243 3219848 cri.go:89] found id: ""
	I1217 12:05:49.295291 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.295306 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:49.295319 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:49.295331 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:49.359872 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:49.359903 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:49.427963 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:49.428002 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:49.444788 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:49.444818 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:49.510631 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:49.502008   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.502410   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504142   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504867   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.506554   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:49.502008   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.502410   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504142   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504867   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.506554   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:49.510652 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:49.510663 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:52.036765 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:52.049010 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:52.049084 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:52.075472 3219848 cri.go:89] found id: ""
	I1217 12:05:52.075500 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.075510 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:52.075517 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:52.075582 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:52.105198 3219848 cri.go:89] found id: ""
	I1217 12:05:52.105222 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.105231 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:52.105238 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:52.105295 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:52.133404 3219848 cri.go:89] found id: ""
	I1217 12:05:52.133428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.133439 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:52.133445 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:52.133507 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:52.158170 3219848 cri.go:89] found id: ""
	I1217 12:05:52.158195 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.158205 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:52.158212 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:52.158270 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:52.182679 3219848 cri.go:89] found id: ""
	I1217 12:05:52.182704 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.182713 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:52.182720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:52.182778 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:52.211743 3219848 cri.go:89] found id: ""
	I1217 12:05:52.211769 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.211778 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:52.211785 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:52.211845 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:52.237892 3219848 cri.go:89] found id: ""
	I1217 12:05:52.237918 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.237927 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:52.237933 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:52.237990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:52.281029 3219848 cri.go:89] found id: ""
	I1217 12:05:52.281055 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.281063 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:52.281073 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:52.281089 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:52.374683 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:52.374721 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:52.390831 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:52.390863 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:52.454058 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:52.444629   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.445414   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447158   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447874   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.449604   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:52.444629   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.445414   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447158   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447874   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.449604   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:52.454081 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:52.454095 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:52.479410 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:52.479443 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
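The block above is one iteration of minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks crictl whether any control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) exists, finds none, and then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The same per-component check can be reproduced by hand from inside the node; the loop below is an illustrative sketch (not minikube's code) and assumes a shell on the node, e.g. via minikube ssh:

    # Sketch: re-run the per-component container check seen in the log.
    # Assumes crictl is installed on the node, as the log output shows.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

An empty result for every component, as in each cycle above, means containerd never started any control-plane container, so the failure is upstream of the apiserver itself (typically kubelet failing to create the static pods).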
	I1217 12:05:55.007287 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:55.021703 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:55.021785 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:55.047053 3219848 cri.go:89] found id: ""
	I1217 12:05:55.047076 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.047085 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:55.047092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:55.047149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:55.074641 3219848 cri.go:89] found id: ""
	I1217 12:05:55.074665 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.074674 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:55.074680 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:55.074742 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:55.103484 3219848 cri.go:89] found id: ""
	I1217 12:05:55.103512 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.103521 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:55.103527 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:55.103586 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:55.132461 3219848 cri.go:89] found id: ""
	I1217 12:05:55.132487 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.132497 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:55.132503 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:55.132561 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:55.157595 3219848 cri.go:89] found id: ""
	I1217 12:05:55.157618 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.157626 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:55.157632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:55.157694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:55.187334 3219848 cri.go:89] found id: ""
	I1217 12:05:55.187354 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.187364 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:55.187371 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:55.187529 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:55.212469 3219848 cri.go:89] found id: ""
	I1217 12:05:55.212492 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.212501 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:55.212508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:55.212567 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:55.238155 3219848 cri.go:89] found id: ""
	I1217 12:05:55.238188 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.238198 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:55.238208 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:55.238237 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:55.361507 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:55.352214   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.352982   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.354653   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.355179   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.356793   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:55.352214   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.352982   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.354653   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.355179   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.356793   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:55.361529 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:55.361542 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:55.387722 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:55.387760 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:55.415663 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:55.415688 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:55.471304 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:55.471342 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:57.988615 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:57.999088 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:57.999163 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:58.029910 3219848 cri.go:89] found id: ""
	I1217 12:05:58.029938 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.029948 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:58.029955 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:58.030021 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:58.056383 3219848 cri.go:89] found id: ""
	I1217 12:05:58.056409 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.056461 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:58.056468 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:58.056526 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:58.082442 3219848 cri.go:89] found id: ""
	I1217 12:05:58.082468 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.082477 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:58.082483 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:58.082543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:58.110467 3219848 cri.go:89] found id: ""
	I1217 12:05:58.110491 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.110500 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:58.110507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:58.110574 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:58.136852 3219848 cri.go:89] found id: ""
	I1217 12:05:58.136879 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.136888 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:58.136895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:58.136976 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:58.163746 3219848 cri.go:89] found id: ""
	I1217 12:05:58.163772 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.163782 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:58.163788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:58.163847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:58.190425 3219848 cri.go:89] found id: ""
	I1217 12:05:58.190451 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.190460 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:58.190467 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:58.190529 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:58.220315 3219848 cri.go:89] found id: ""
	I1217 12:05:58.220338 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.220347 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:58.220358 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:58.220368 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:58.290204 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:58.290287 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:58.323039 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:58.323120 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:58.402482 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:58.393790   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.394615   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396214   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396884   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.398347   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:58.393790   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.394615   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396214   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396884   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.398347   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:58.402504 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:58.402521 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:58.428716 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:58.428754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:00.959753 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:00.970910 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:00.970990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:01.005870 3219848 cri.go:89] found id: ""
	I1217 12:06:01.005941 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.005958 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:01.005967 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:01.006031 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:01.034724 3219848 cri.go:89] found id: ""
	I1217 12:06:01.034747 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.034756 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:01.034765 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:01.034823 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:01.059798 3219848 cri.go:89] found id: ""
	I1217 12:06:01.059824 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.059836 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:01.059842 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:01.059900 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:01.089347 3219848 cri.go:89] found id: ""
	I1217 12:06:01.089370 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.089378 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:01.089385 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:01.089448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:01.115166 3219848 cri.go:89] found id: ""
	I1217 12:06:01.115201 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.115211 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:01.115218 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:01.115286 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:01.142081 3219848 cri.go:89] found id: ""
	I1217 12:06:01.142109 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.142118 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:01.142125 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:01.142211 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:01.173172 3219848 cri.go:89] found id: ""
	I1217 12:06:01.173198 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.173208 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:01.173215 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:01.173280 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:01.200453 3219848 cri.go:89] found id: ""
	I1217 12:06:01.200477 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.200486 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:01.200496 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:01.200506 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:01.226189 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:01.226231 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:01.283020 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:01.283101 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:01.360095 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:01.360131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:01.377017 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:01.377049 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:01.442041 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:01.434467   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.434821   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436378   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436785   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.438222   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:01.434467   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.434821   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436378   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436785   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.438222   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
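Every kubectl attempt in these cycles fails the same way: dial tcp [::1]:8443: connect: connection refused. That is a TCP-level refusal, so nothing is listening on the apiserver port at all, which is consistent with crictl reporting zero kube-apiserver containers. A quick confirmation from inside the node (a sketch; port 8443 is the apiserver port from the kubeconfig used in the log):

    # Sketch: confirm nothing is serving the apiserver port inside the node.
    sudo ss -ltn | grep ':8443' || echo "nothing listening on :8443"
    # With the port closed, an HTTPS probe fails at TCP connect, not at TLS:
    curl -ksS https://localhost:8443/livez || true

If something were listening but unhealthy, kubectl would instead report a TLS or HTTP error rather than connection refused.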
	I1217 12:06:03.943920 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:03.955271 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:03.955384 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:03.987078 3219848 cri.go:89] found id: ""
	I1217 12:06:03.987106 3219848 logs.go:282] 0 containers: []
	W1217 12:06:03.987115 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:03.987124 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:03.987185 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:04.020179 3219848 cri.go:89] found id: ""
	I1217 12:06:04.020207 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.020243 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:04.020250 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:04.020328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:04.049457 3219848 cri.go:89] found id: ""
	I1217 12:06:04.049484 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.049494 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:04.049500 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:04.049565 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:04.077274 3219848 cri.go:89] found id: ""
	I1217 12:06:04.077302 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.077311 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:04.077318 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:04.077386 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:04.108697 3219848 cri.go:89] found id: ""
	I1217 12:06:04.108725 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.108734 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:04.108740 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:04.108800 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:04.133870 3219848 cri.go:89] found id: ""
	I1217 12:06:04.133949 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.133974 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:04.133988 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:04.134075 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:04.158589 3219848 cri.go:89] found id: ""
	I1217 12:06:04.158616 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.158625 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:04.158632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:04.158705 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:04.182544 3219848 cri.go:89] found id: ""
	I1217 12:06:04.182568 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.182577 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:04.182605 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:04.182630 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:04.198694 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:04.198722 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:04.286551 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:04.273260   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.277107   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.278962   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.279268   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.280776   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:04.273260   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.277107   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.278962   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.279268   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.280776   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:04.286576 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:04.286587 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:04.322177 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:04.322211 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:04.362745 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:04.362774 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:06.922523 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:06.933191 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:06.933262 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:06.962648 3219848 cri.go:89] found id: ""
	I1217 12:06:06.962675 3219848 logs.go:282] 0 containers: []
	W1217 12:06:06.962685 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:06.962692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:06.962750 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:06.991732 3219848 cri.go:89] found id: ""
	I1217 12:06:06.991757 3219848 logs.go:282] 0 containers: []
	W1217 12:06:06.991765 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:06.991772 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:06.991829 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:07.018557 3219848 cri.go:89] found id: ""
	I1217 12:06:07.018584 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.018594 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:07.018600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:07.018659 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:07.044679 3219848 cri.go:89] found id: ""
	I1217 12:06:07.044704 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.044713 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:07.044720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:07.044786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:07.073836 3219848 cri.go:89] found id: ""
	I1217 12:06:07.073905 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.073930 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:07.073944 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:07.074020 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:07.100945 3219848 cri.go:89] found id: ""
	I1217 12:06:07.100972 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.100982 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:07.100989 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:07.101094 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:07.125935 3219848 cri.go:89] found id: ""
	I1217 12:06:07.125963 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.125972 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:07.125978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:07.126061 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:07.151599 3219848 cri.go:89] found id: ""
	I1217 12:06:07.151624 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.151633 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:07.151641 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:07.151653 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:07.167414 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:07.167439 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:07.235174 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:07.226345   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.226997   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.228627   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.229347   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.231069   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:07.226345   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.226997   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.228627   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.229347   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.231069   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:07.235246 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:07.235266 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:07.264720 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:07.264754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:07.349181 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:07.349210 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:09.906484 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:09.917044 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:09.917120 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:09.941939 3219848 cri.go:89] found id: ""
	I1217 12:06:09.942004 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.942024 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:09.942031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:09.942088 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:09.966481 3219848 cri.go:89] found id: ""
	I1217 12:06:09.966507 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.966515 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:09.966523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:09.966622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:09.991806 3219848 cri.go:89] found id: ""
	I1217 12:06:09.991830 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.991839 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:09.991845 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:09.991901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:10.027713 3219848 cri.go:89] found id: ""
	I1217 12:06:10.027784 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.027800 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:10.027808 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:10.027874 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:10.060097 3219848 cri.go:89] found id: ""
	I1217 12:06:10.060124 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.060133 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:10.060140 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:10.060203 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:10.091977 3219848 cri.go:89] found id: ""
	I1217 12:06:10.092002 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.092010 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:10.092018 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:10.092081 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:10.118481 3219848 cri.go:89] found id: ""
	I1217 12:06:10.118504 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.118513 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:10.118526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:10.118586 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:10.145196 3219848 cri.go:89] found id: ""
	I1217 12:06:10.145263 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.145278 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:10.145288 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:10.145306 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:10.161573 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:10.161603 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:10.227235 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:10.218460   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.219270   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.220964   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.221573   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.223258   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:10.218460   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.219270   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.220964   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.221573   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.223258   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:10.227259 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:10.227273 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:10.253333 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:10.253644 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:10.302209 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:10.302284 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
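
The block above is one iteration of minikube's apiserver wait loop: it checks for a running kube-apiserver process, probes each expected control-plane container name through crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status output before sleeping and retrying. A minimal sketch of the same probe sequence, assuming crictl is on PATH and containerd serves the k8s.io namespace (the component list mirrors the log; everything else is illustrative):

	# Probe each control-plane component the wait loop looks for; an empty
	# result for every name reproduces the "0 containers" lines above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done
	# The process-level check each cycle starts with:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running"
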
	I1217 12:06:12.881891 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:12.892449 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:12.892519 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:12.919824 3219848 cri.go:89] found id: ""
	I1217 12:06:12.919848 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.919856 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:12.919863 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:12.919924 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:12.946684 3219848 cri.go:89] found id: ""
	I1217 12:06:12.946711 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.946721 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:12.946728 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:12.946808 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:12.970796 3219848 cri.go:89] found id: ""
	I1217 12:06:12.970820 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.970830 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:12.970837 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:12.970904 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:12.996393 3219848 cri.go:89] found id: ""
	I1217 12:06:12.996459 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.996469 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:12.996476 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:12.996538 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:13.022560 3219848 cri.go:89] found id: ""
	I1217 12:06:13.022587 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.022596 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:13.022603 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:13.022664 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:13.050809 3219848 cri.go:89] found id: ""
	I1217 12:06:13.050839 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.050849 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:13.050856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:13.050919 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:13.077432 3219848 cri.go:89] found id: ""
	I1217 12:06:13.077460 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.077469 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:13.077477 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:13.077540 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:13.104029 3219848 cri.go:89] found id: ""
	I1217 12:06:13.104056 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.104065 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:13.104075 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:13.104086 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:13.162000 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:13.162038 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:13.177865 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:13.177891 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:13.241266 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:13.232767   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.233565   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235109   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235417   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.236871   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:13.232767   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.233565   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235109   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235417   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.236871   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:13.241289 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:13.241302 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:13.271232 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:13.271269 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:15.839567 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:15.850326 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:15.850396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:15.875471 3219848 cri.go:89] found id: ""
	I1217 12:06:15.875493 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.875502 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:15.875509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:15.875566 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:15.899977 3219848 cri.go:89] found id: ""
	I1217 12:06:15.899998 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.900007 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:15.900013 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:15.900073 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:15.926093 3219848 cri.go:89] found id: ""
	I1217 12:06:15.926117 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.926126 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:15.926133 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:15.926193 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:15.951373 3219848 cri.go:89] found id: ""
	I1217 12:06:15.951397 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.951407 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:15.951414 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:15.951470 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:15.976937 3219848 cri.go:89] found id: ""
	I1217 12:06:15.976963 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.976972 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:15.976979 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:15.977041 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:16.003518 3219848 cri.go:89] found id: ""
	I1217 12:06:16.003717 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.003750 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:16.003786 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:16.003901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:16.032115 3219848 cri.go:89] found id: ""
	I1217 12:06:16.032142 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.032151 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:16.032159 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:16.032219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:16.061490 3219848 cri.go:89] found id: ""
	I1217 12:06:16.061517 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.061526 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:16.061536 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:16.061547 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:16.077146 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:16.077179 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:16.145955 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:16.137946   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.138559   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140379   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140910   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.142053   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:16.137946   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.138559   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140379   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140910   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.142053   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:16.145981 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:16.145995 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:16.172145 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:16.172180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:16.206805 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:16.206833 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:18.766689 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:18.777034 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:18.777108 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:18.805815 3219848 cri.go:89] found id: ""
	I1217 12:06:18.805838 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.805847 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:18.805853 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:18.805910 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:18.831468 3219848 cri.go:89] found id: ""
	I1217 12:06:18.831492 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.831501 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:18.831508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:18.831567 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:18.859309 3219848 cri.go:89] found id: ""
	I1217 12:06:18.859339 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.859349 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:18.859368 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:18.859436 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:18.884524 3219848 cri.go:89] found id: ""
	I1217 12:06:18.884552 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.884561 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:18.884569 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:18.884665 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:18.909522 3219848 cri.go:89] found id: ""
	I1217 12:06:18.909545 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.909554 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:18.909561 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:18.909620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:18.935126 3219848 cri.go:89] found id: ""
	I1217 12:06:18.935151 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.935161 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:18.935167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:18.935227 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:18.964480 3219848 cri.go:89] found id: ""
	I1217 12:06:18.964506 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.964516 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:18.964522 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:18.964581 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:18.990408 3219848 cri.go:89] found id: ""
	I1217 12:06:18.990435 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.990444 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:18.990454 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:18.990466 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:19.017937 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:19.017974 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:19.048976 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:19.049004 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:19.108146 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:19.108184 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:19.125457 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:19.125507 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:19.190960 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:19.182754   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.183274   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185018   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185416   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.186923   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:19.182754   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.183274   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185018   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185416   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.186923   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
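
Every "describe nodes" attempt in this run fails identically: kubectl inside the node dials the apiserver on localhost:8443 and is refused, which is consistent with the empty crictl listings above, since no kube-apiserver container exists to listen on that port. A hedged way to confirm the same condition from inside the node (the kubectl path and kubeconfig match the log; the ss probe is illustrative):

	# Nothing listening on 8443 explains the refused dials:
	sudo ss -ltn | grep -w 8443 || echo "no listener on 8443"
	# Re-running the exact command from the log should reproduce exit status 1:
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig; echo "exit: $?"
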
	I1217 12:06:21.691321 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:21.702288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:21.702373 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:21.728533 3219848 cri.go:89] found id: ""
	I1217 12:06:21.728561 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.728571 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:21.728577 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:21.728645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:21.755298 3219848 cri.go:89] found id: ""
	I1217 12:06:21.755323 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.755333 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:21.755345 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:21.755403 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:21.784470 3219848 cri.go:89] found id: ""
	I1217 12:06:21.784494 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.784503 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:21.784509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:21.784568 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:21.811503 3219848 cri.go:89] found id: ""
	I1217 12:06:21.811528 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.811538 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:21.811544 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:21.811602 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:21.841147 3219848 cri.go:89] found id: ""
	I1217 12:06:21.841212 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.841227 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:21.841241 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:21.841303 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:21.867736 3219848 cri.go:89] found id: ""
	I1217 12:06:21.867763 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.867773 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:21.867779 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:21.867847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:21.897039 3219848 cri.go:89] found id: ""
	I1217 12:06:21.897104 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.897121 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:21.897128 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:21.897187 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:21.922398 3219848 cri.go:89] found id: ""
	I1217 12:06:21.922420 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.922429 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:21.922438 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:21.922449 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:21.980203 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:21.980241 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:21.996482 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:21.996513 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:22.074426 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:22.061574   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.062326   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064118   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064738   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.070487   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:22.061574   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.062326   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064118   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064738   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.070487   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:22.074474 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:22.074488 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:22.101174 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:22.101210 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:24.630003 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:24.640702 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:24.640773 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:24.666366 3219848 cri.go:89] found id: ""
	I1217 12:06:24.666390 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.666399 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:24.666408 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:24.666465 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:24.693372 3219848 cri.go:89] found id: ""
	I1217 12:06:24.693398 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.693407 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:24.693413 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:24.693478 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:24.723159 3219848 cri.go:89] found id: ""
	I1217 12:06:24.723181 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.723190 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:24.723197 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:24.723264 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:24.747933 3219848 cri.go:89] found id: ""
	I1217 12:06:24.747960 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.747969 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:24.747976 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:24.748044 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:24.774083 3219848 cri.go:89] found id: ""
	I1217 12:06:24.774105 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.774113 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:24.774120 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:24.774186 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:24.808050 3219848 cri.go:89] found id: ""
	I1217 12:06:24.808076 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.808085 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:24.808092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:24.808200 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:24.833993 3219848 cri.go:89] found id: ""
	I1217 12:06:24.834070 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.834085 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:24.834093 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:24.834153 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:24.860654 3219848 cri.go:89] found id: ""
	I1217 12:06:24.860679 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.860688 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:24.860697 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:24.860708 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:24.917182 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:24.917265 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:24.933462 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:24.933491 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:25.002903 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:24.992978   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.993789   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.995410   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.996068   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.997870   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:24.992978   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.993789   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.995410   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.996068   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.997870   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:25.002927 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:25.002960 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:25.031774 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:25.031809 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:27.560620 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:27.575695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:27.575766 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:27.603395 3219848 cri.go:89] found id: ""
	I1217 12:06:27.603421 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.603430 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:27.603436 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:27.603498 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:27.628716 3219848 cri.go:89] found id: ""
	I1217 12:06:27.628739 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.628747 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:27.628754 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:27.628810 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:27.653566 3219848 cri.go:89] found id: ""
	I1217 12:06:27.653629 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.653653 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:27.653679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:27.653756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:27.679125 3219848 cri.go:89] found id: ""
	I1217 12:06:27.679150 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.679159 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:27.679166 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:27.679245 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:27.705566 3219848 cri.go:89] found id: ""
	I1217 12:06:27.705632 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.705656 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:27.705677 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:27.705762 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:27.730473 3219848 cri.go:89] found id: ""
	I1217 12:06:27.730541 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.730556 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:27.730564 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:27.730639 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:27.755451 3219848 cri.go:89] found id: ""
	I1217 12:06:27.755476 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.755485 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:27.755492 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:27.755552 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:27.783637 3219848 cri.go:89] found id: ""
	I1217 12:06:27.783663 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.783673 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:27.783682 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:27.783693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:27.815668 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:27.815707 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:27.846761 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:27.846788 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:27.903961 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:27.903992 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:27.920251 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:27.920285 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:27.989986 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:27.982512   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.983471   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984453   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984910   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.985983   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:27.982512   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.983471   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984453   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984910   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.985983   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
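
When the loop keeps cycling like this, the gathered kubelet and containerd journals are where the cause of the missing static pods would surface. A quick triage sketch using the same journals the gatherer pulls, narrowed to likely failure lines (the grep patterns are illustrative, not part of the original commands):

	# Same sources as the "kubelet" and "containerd" log sections above:
	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
	sudo journalctl -u containerd -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
	# Container-level view behind the "container status" section:
	sudo crictl ps -a
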
	I1217 12:06:30.490267 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:30.501854 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:30.501936 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:30.577313 3219848 cri.go:89] found id: ""
	I1217 12:06:30.577342 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.577352 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:30.577376 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:30.577460 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:30.606634 3219848 cri.go:89] found id: ""
	I1217 12:06:30.606660 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.606670 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:30.606676 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:30.606744 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:30.632310 3219848 cri.go:89] found id: ""
	I1217 12:06:30.632342 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.632351 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:30.632358 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:30.632473 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:30.658929 3219848 cri.go:89] found id: ""
	I1217 12:06:30.658960 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.658970 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:30.658976 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:30.659036 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:30.690494 3219848 cri.go:89] found id: ""
	I1217 12:06:30.690519 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.690529 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:30.690535 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:30.690598 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:30.716270 3219848 cri.go:89] found id: ""
	I1217 12:06:30.716295 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.716305 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:30.716312 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:30.716396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:30.743684 3219848 cri.go:89] found id: ""
	I1217 12:06:30.743720 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.743738 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:30.743745 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:30.743823 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:30.771862 3219848 cri.go:89] found id: ""
	I1217 12:06:30.771895 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.771905 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:30.771915 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:30.771928 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:30.829962 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:30.829997 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:30.846244 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:30.846269 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:30.910789 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:30.902355   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.903184   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.904920   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.905376   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.906932   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:30.902355   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.903184   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.904920   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.905376   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.906932   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:30.910812 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:30.910825 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:30.937515 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:30.937552 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:33.467661 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:33.479263 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:33.479335 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:33.531382 3219848 cri.go:89] found id: ""
	I1217 12:06:33.531405 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.531414 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:33.531420 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:33.531491 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:33.586607 3219848 cri.go:89] found id: ""
	I1217 12:06:33.586628 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.586637 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:33.586651 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:33.586708 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:33.622903 3219848 cri.go:89] found id: ""
	I1217 12:06:33.622925 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.622934 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:33.622940 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:33.623012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:33.652846 3219848 cri.go:89] found id: ""
	I1217 12:06:33.652874 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.652882 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:33.652889 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:33.652946 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:33.677852 3219848 cri.go:89] found id: ""
	I1217 12:06:33.677877 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.677886 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:33.677893 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:33.677972 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:33.706815 3219848 cri.go:89] found id: ""
	I1217 12:06:33.706840 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.706849 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:33.706856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:33.706918 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:33.736780 3219848 cri.go:89] found id: ""
	I1217 12:06:33.736806 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.736816 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:33.736822 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:33.736880 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:33.761376 3219848 cri.go:89] found id: ""
	I1217 12:06:33.761414 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.761424 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:33.761433 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:33.761445 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:33.819076 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:33.819113 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:33.835282 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:33.835311 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:33.903109 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:33.894518   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.895131   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.896856   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.897422   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.899092   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:33.894518   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.895131   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.896856   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.897422   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.899092   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:33.903181 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:33.903206 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:33.935593 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:33.935636 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:36.469816 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:36.480311 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:36.480394 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:36.522000 3219848 cri.go:89] found id: ""
	I1217 12:06:36.522026 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.522035 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:36.522041 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:36.522098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:36.584783 3219848 cri.go:89] found id: ""
	I1217 12:06:36.584811 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.584819 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:36.584825 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:36.584885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:36.614443 3219848 cri.go:89] found id: ""
	I1217 12:06:36.614469 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.614478 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:36.614484 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:36.614543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:36.642952 3219848 cri.go:89] found id: ""
	I1217 12:06:36.642974 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.642982 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:36.642989 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:36.643047 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:36.667989 3219848 cri.go:89] found id: ""
	I1217 12:06:36.668011 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.668019 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:36.668025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:36.668109 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:36.696974 3219848 cri.go:89] found id: ""
	I1217 12:06:36.697049 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.697062 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:36.697096 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:36.697191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:36.723789 3219848 cri.go:89] found id: ""
	I1217 12:06:36.723812 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.723821 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:36.723828 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:36.723885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:36.748007 3219848 cri.go:89] found id: ""
	I1217 12:06:36.748078 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.748102 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:36.748126 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:36.748167 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:36.778526 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:36.778554 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:36.834614 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:36.834648 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:36.852247 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:36.852276 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:36.920099 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:36.911022   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.911723   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913310   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913643   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.915127   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:36.911022   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.911723   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913310   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913643   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.915127   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:36.920123 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:36.920135 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:39.447091 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:39.457670 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:39.457740 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:39.481236 3219848 cri.go:89] found id: ""
	I1217 12:06:39.481260 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.481269 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:39.481276 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:39.481333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:39.539773 3219848 cri.go:89] found id: ""
	I1217 12:06:39.539800 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.539810 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:39.539817 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:39.539879 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:39.586024 3219848 cri.go:89] found id: ""
	I1217 12:06:39.586053 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.586069 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:39.586075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:39.586133 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:39.614247 3219848 cri.go:89] found id: ""
	I1217 12:06:39.614272 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.614281 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:39.614288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:39.614348 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:39.639817 3219848 cri.go:89] found id: ""
	I1217 12:06:39.639840 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.639848 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:39.639855 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:39.639910 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:39.663356 3219848 cri.go:89] found id: ""
	I1217 12:06:39.663382 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.663390 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:39.663397 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:39.663457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:39.692611 3219848 cri.go:89] found id: ""
	I1217 12:06:39.692638 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.692647 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:39.692654 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:39.692714 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:39.718640 3219848 cri.go:89] found id: ""
	I1217 12:06:39.718665 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.718674 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:39.718686 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:39.718698 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:39.743735 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:39.743776 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:39.776101 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:39.776130 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:39.839871 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:39.839912 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:39.856925 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:39.856956 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:39.927715 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:39.920216   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.920790   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.921971   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.922477   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.923988   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:39.920216   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.920790   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.921971   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.922477   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.923988   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:42.428378 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:42.439785 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:42.439861 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:42.467826 3219848 cri.go:89] found id: ""
	I1217 12:06:42.467849 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.467857 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:42.467864 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:42.467928 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:42.492505 3219848 cri.go:89] found id: ""
	I1217 12:06:42.492533 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.492542 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:42.492549 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:42.492607 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:42.562039 3219848 cri.go:89] found id: ""
	I1217 12:06:42.562062 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.562071 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:42.562077 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:42.562147 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:42.600111 3219848 cri.go:89] found id: ""
	I1217 12:06:42.600139 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.600148 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:42.600155 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:42.600218 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:42.631003 3219848 cri.go:89] found id: ""
	I1217 12:06:42.631026 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.631035 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:42.631042 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:42.631101 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:42.655257 3219848 cri.go:89] found id: ""
	I1217 12:06:42.655283 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.655292 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:42.655305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:42.655366 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:42.681199 3219848 cri.go:89] found id: ""
	I1217 12:06:42.681220 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.681229 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:42.681236 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:42.681295 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:42.706511 3219848 cri.go:89] found id: ""
	I1217 12:06:42.706535 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.706544 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:42.706553 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:42.706565 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:42.762839 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:42.762875 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:42.779904 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:42.779936 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:42.849079 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:42.840586   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.841187   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.842724   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.843182   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.844615   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:42.840586   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.841187   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.842724   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.843182   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.844615   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:42.849103 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:42.849114 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:42.874488 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:42.874529 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:45.406478 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:45.417919 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:45.417989 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:45.445578 3219848 cri.go:89] found id: ""
	I1217 12:06:45.445614 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.445624 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:45.445632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:45.445694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:45.477590 3219848 cri.go:89] found id: ""
	I1217 12:06:45.477674 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.477699 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:45.477735 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:45.477831 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:45.515743 3219848 cri.go:89] found id: ""
	I1217 12:06:45.515765 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.515774 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:45.515781 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:45.515840 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:45.550588 3219848 cri.go:89] found id: ""
	I1217 12:06:45.550610 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.550619 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:45.550626 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:45.550684 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:45.595764 3219848 cri.go:89] found id: ""
	I1217 12:06:45.595785 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.595794 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:45.595802 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:45.595862 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:45.621971 3219848 cri.go:89] found id: ""
	I1217 12:06:45.621994 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.622003 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:45.622010 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:45.622077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:45.648142 3219848 cri.go:89] found id: ""
	I1217 12:06:45.648176 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.648186 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:45.648193 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:45.648266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:45.677328 3219848 cri.go:89] found id: ""
	I1217 12:06:45.677364 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.677373 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:45.677383 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:45.677401 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:45.750976 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:45.739342   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.739999   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.744563   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.745136   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.746629   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:45.739342   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.739999   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.744563   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.745136   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.746629   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:45.750998 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:45.751012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:45.777019 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:45.777056 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:45.805927 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:45.805957 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:45.861380 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:45.861414 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:48.377400 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:48.388086 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:48.388158 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:48.412282 3219848 cri.go:89] found id: ""
	I1217 12:06:48.412305 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.412313 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:48.412320 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:48.412377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:48.437811 3219848 cri.go:89] found id: ""
	I1217 12:06:48.437846 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.437856 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:48.437879 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:48.437953 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:48.462517 3219848 cri.go:89] found id: ""
	I1217 12:06:48.462539 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.462547 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:48.462557 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:48.462615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:48.486379 3219848 cri.go:89] found id: ""
	I1217 12:06:48.486402 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.486411 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:48.486418 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:48.486475 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:48.582544 3219848 cri.go:89] found id: ""
	I1217 12:06:48.582569 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.582578 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:48.582585 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:48.582691 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:48.612954 3219848 cri.go:89] found id: ""
	I1217 12:06:48.612980 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.612990 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:48.612997 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:48.613058 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:48.638059 3219848 cri.go:89] found id: ""
	I1217 12:06:48.638083 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.638091 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:48.638098 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:48.638160 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:48.663252 3219848 cri.go:89] found id: ""
	I1217 12:06:48.663278 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.663288 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:48.663298 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:48.663308 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:48.719388 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:48.719422 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:48.735198 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:48.735227 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:48.801972 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:48.793731   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.794319   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.795999   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.796684   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.798278   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:48.793731   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.794319   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.795999   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.796684   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.798278   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:48.801995 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:48.802008 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:48.827753 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:48.827787 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:51.362888 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:51.373695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:51.373779 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:51.399521 3219848 cri.go:89] found id: ""
	I1217 12:06:51.399547 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.399556 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:51.399563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:51.399620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:51.425074 3219848 cri.go:89] found id: ""
	I1217 12:06:51.425140 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.425154 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:51.425161 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:51.425219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:51.449708 3219848 cri.go:89] found id: ""
	I1217 12:06:51.449731 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.449740 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:51.449746 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:51.449818 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:51.478561 3219848 cri.go:89] found id: ""
	I1217 12:06:51.478585 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.478594 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:51.478601 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:51.478687 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:51.520104 3219848 cri.go:89] found id: ""
	I1217 12:06:51.520142 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.520152 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:51.520159 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:51.520227 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:51.589783 3219848 cri.go:89] found id: ""
	I1217 12:06:51.589826 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.589836 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:51.589843 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:51.589914 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:51.616852 3219848 cri.go:89] found id: ""
	I1217 12:06:51.616888 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.616898 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:51.616904 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:51.616967 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:51.643529 3219848 cri.go:89] found id: ""
	I1217 12:06:51.643609 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.643632 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:51.643661 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:51.643706 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:51.707671 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:51.699393   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.700178   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.701673   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.702158   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.703665   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:51.699393   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.700178   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.701673   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.702158   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.703665   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:51.707744 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:51.707772 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:51.733586 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:51.733622 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:51.763883 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:51.763912 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:51.818754 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:51.818788 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:54.336140 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:54.350294 3219848 out.go:203] 
	W1217 12:06:54.353246 3219848 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 12:06:54.353303 3219848 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 12:06:54.353317 3219848 out.go:285] * Related issues:
	* Related issues:
	W1217 12:06:54.353339 3219848 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1217 12:06:54.353354 3219848 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1217 12:06:54.356285 3219848 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 105
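Note: the loop captured above is minikube's apiserver wait: every ~3s it runs `sudo pgrep -xnf kube-apiserver.*minikube.*` and `sudo crictl ps -a --quiet --name=kube-apiserver` on the node, finds no process and no container, gathers kubelet/containerd/dmesg logs, and retries until the 6m0s deadline, then exits 105 with K8S_APISERVER_MISSING. As a minimal sketch (not part of the test run), one might replay the same checks by hand while the newest-cni-669680 container is still up; the commands are the ones in the log, except `getenforce`, which is the standard SELinux check implied by the exit message's suggestion:

	# shell into the kic node for this profile
	minikube ssh -p newest-cni-669680
	# is an apiserver process running at all? (this is what the wait loop polls)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# did containerd ever create (and perhaps kill) an apiserver container?
	sudo crictl ps -a --name=kube-apiserver
	# kubelet launches the static pod; its journal usually names the actual failure
	sudo journalctl -u kubelet -n 400
	# the exit message suggests verifying SELinux is disabled
	getenforce

If `crictl ps -a` shows no kube-apiserver container at all (as in every poll above), the failure happened before container creation, which points at kubelet or the static pod manifest rather than the apiserver flags themselves.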
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-669680
helpers_test.go:244: (dbg) docker inspect newest-cni-669680:

-- stdout --
	[
	    {
	        "Id": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	        "Created": "2025-12-17T11:50:38.904543162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3219980,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T12:00:44.656180291Z",
	            "FinishedAt": "2025-12-17T12:00:43.27484179Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hosts",
	        "LogPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc-json.log",
	        "Name": "/newest-cni-669680",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-669680:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-669680",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	                "LowerDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-669680",
	                "Source": "/var/lib/docker/volumes/newest-cni-669680/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-669680",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-669680",
	                "name.minikube.sigs.k8s.io": "newest-cni-669680",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f695758c865267c895635ea7898bf1b9d81e4dd5864219138eceead759e9a1b",
	            "SandboxKey": "/var/run/docker/netns/9f695758c865",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-669680": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:62:0f:03:13:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e84740d61c89f51b13c32d88b9c5aafc9e8e1ba5e275e3db72c9a38077e44a94",
	                    "EndpointID": "b90d44188d07afa11a62007f533d5391259eb969677e3f00be6723f39985284a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-669680",
	                        "23474ef32ddb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
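
The inspect output above shows every container port published on 127.0.0.1 with an ephemeral host port (22/tcp -> 36053, 8443/tcp -> 36056, and so on). As a minimal sketch of how such a mapping can be read back with the docker CLI, the Go snippet below uses the same Go-template format string that appears verbatim in the cli_runner lines later in this log; the helper name is ours for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker mapped to the container's 22/tcp,
// using the same template string logged by cli_runner in the start logs below.
// Illustrative helper only; not minikube's actual code.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("newest-cni-669680")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh host port:", port) // 36053 per the inspect output above
}
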
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (357.206609ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
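
The --format={{.Host}} flag is a Go text/template evaluated against minikube's status struct, which is why the command can print "Running" for the host while still exiting non-zero for unhealthy components. A sketch of the rendering mechanism, with a simplified, hypothetical status type standing in for the real one:

package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in for minikube's status struct; the real
// type has more fields. Only the template mechanics are shown here.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// Prints "Running" even though other components are down; the exit
	// code, not the template output, carries the overall health signal.
	_ = tmpl.Execute(os.Stdout, st)
}
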
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-669680 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-669680 logs -n 25: (1.553072202s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-628462 image list --format=json                                                                                                                                                                                                              │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ pause   │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ unpause │ -p embed-certs-628462 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ stop    │ -p no-preload-118262 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ addons  │ enable dashboard -p no-preload-118262 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-669680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:58 UTC │                     │
	│ stop    │ -p newest-cni-669680 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │ 17 Dec 25 12:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-669680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │ 17 Dec 25 12:00 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 12:00:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 12:00:44.347526 3219848 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:00:44.347663 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347673 3219848 out.go:374] Setting ErrFile to fd 2...
	I1217 12:00:44.347678 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347938 3219848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 12:00:44.348321 3219848 out.go:368] Setting JSON to false
	I1217 12:00:44.349222 3219848 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63795,"bootTime":1765909050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 12:00:44.349300 3219848 start.go:143] virtualization:  
	I1217 12:00:44.352466 3219848 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 12:00:44.356190 3219848 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 12:00:44.356282 3219848 notify.go:221] Checking for updates...
	I1217 12:00:44.362135 3219848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:00:44.365177 3219848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:44.368881 3219848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 12:00:44.372015 3219848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 12:00:44.375014 3219848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:00:44.378336 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:44.378951 3219848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:00:44.413369 3219848 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 12:00:44.413513 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.473970 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.464532408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.474081 3219848 docker.go:319] overlay module found
	I1217 12:00:44.477205 3219848 out.go:179] * Using the docker driver based on existing profile
	I1217 12:00:44.480155 3219848 start.go:309] selected driver: docker
	I1217 12:00:44.480182 3219848 start.go:927] validating driver "docker" against &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.480300 3219848 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:00:44.481122 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.568687 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.559079636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.569054 3219848 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 12:00:44.569088 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:44.569145 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:44.569196 3219848 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.574245 3219848 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 12:00:44.576964 3219848 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 12:00:44.579814 3219848 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 12:00:44.582545 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:44.582593 3219848 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 12:00:44.582604 3219848 cache.go:65] Caching tarball of preloaded images
	I1217 12:00:44.582624 3219848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 12:00:44.582700 3219848 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 12:00:44.582711 3219848 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 12:00:44.582826 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.602190 3219848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 12:00:44.602216 3219848 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 12:00:44.602262 3219848 cache.go:243] Successfully downloaded all kic artifacts
	I1217 12:00:44.602326 3219848 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:00:44.602428 3219848 start.go:364] duration metric: took 68.29µs to acquireMachinesLock for "newest-cni-669680"
	I1217 12:00:44.602457 3219848 start.go:96] Skipping create...Using existing machine configuration
	I1217 12:00:44.602505 3219848 fix.go:54] fixHost starting: 
	I1217 12:00:44.602917 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.620734 3219848 fix.go:112] recreateIfNeeded on newest-cni-669680: state=Stopped err=<nil>
	W1217 12:00:44.620765 3219848 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 12:00:44.760258 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:46.760539 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:44.623987 3219848 out.go:252] * Restarting existing docker container for "newest-cni-669680" ...
	I1217 12:00:44.624072 3219848 cli_runner.go:164] Run: docker start newest-cni-669680
	I1217 12:00:44.870900 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.893559 3219848 kic.go:432] container "newest-cni-669680" state is running.
	I1217 12:00:44.894282 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:44.917205 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.917570 3219848 machine.go:94] provisionDockerMachine start ...
	I1217 12:00:44.917645 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:44.945980 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:44.946096 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:44.946104 3219848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:00:44.946864 3219848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 12:00:48.084367 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
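
The single "ssh: handshake failed: EOF" at 12:00:44 followed by a successful hostname run a few seconds later is the expected pattern when dialing a just-restarted container before its sshd is accepting connections. A minimal sketch of the wait-and-retry idea; libmachine's actual retry logic differs, this only illustrates why one transient failure is harmless:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP polls addr until it accepts a TCP connection or timeout expires.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond) // sshd may still be starting
	}
}

func main() {
	// 127.0.0.1:36053 is the 22/tcp mapping from the inspect output above.
	if err := waitForTCP("127.0.0.1:36053", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
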
	I1217 12:00:48.084399 3219848 ubuntu.go:182] provisioning hostname "newest-cni-669680"
	I1217 12:00:48.084507 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.104367 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.104656 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.104680 3219848 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 12:00:48.247265 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 12:00:48.247353 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.270652 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.270788 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.270817 3219848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:00:48.417473 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:00:48.417557 3219848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 12:00:48.417596 3219848 ubuntu.go:190] setting up certificates
	I1217 12:00:48.417639 3219848 provision.go:84] configureAuth start
	I1217 12:00:48.417749 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:48.437471 3219848 provision.go:143] copyHostCerts
	I1217 12:00:48.437568 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 12:00:48.437587 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 12:00:48.437717 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 12:00:48.437858 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 12:00:48.437877 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 12:00:48.437916 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 12:00:48.438005 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 12:00:48.438028 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 12:00:48.438055 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 12:00:48.438157 3219848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
	I1217 12:00:48.577436 3219848 provision.go:177] copyRemoteCerts
	I1217 12:00:48.577506 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:00:48.577546 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.595338 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.692538 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:00:48.711734 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 12:00:48.729881 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 12:00:48.748237 3219848 provision.go:87] duration metric: took 330.555362ms to configureAuth
	I1217 12:00:48.748262 3219848 ubuntu.go:206] setting minikube options for container-runtime
	I1217 12:00:48.748550 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:48.748561 3219848 machine.go:97] duration metric: took 3.830976751s to provisionDockerMachine
	I1217 12:00:48.748569 3219848 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 12:00:48.748581 3219848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:00:48.748643 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:00:48.748683 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.766578 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.864654 3219848 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:00:48.868220 3219848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 12:00:48.868249 3219848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 12:00:48.868261 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 12:00:48.868318 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 12:00:48.868401 3219848 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 12:00:48.868523 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:00:48.876210 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:48.894408 3219848 start.go:296] duration metric: took 145.823675ms for postStartSetup
	I1217 12:00:48.894507 3219848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 12:00:48.894563 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.913872 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.010734 3219848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 12:00:49.017136 3219848 fix.go:56] duration metric: took 4.414624566s for fixHost
	I1217 12:00:49.017182 3219848 start.go:83] releasing machines lock for "newest-cni-669680", held for 4.414721098s
	I1217 12:00:49.017319 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:49.041576 3219848 ssh_runner.go:195] Run: cat /version.json
	I1217 12:00:49.041642 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.041898 3219848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:00:49.041972 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.071567 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.072178 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.261249 3219848 ssh_runner.go:195] Run: systemctl --version
	I1217 12:00:49.267897 3219848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:00:49.272503 3219848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:00:49.272574 3219848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:00:49.280715 3219848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 12:00:49.280743 3219848 start.go:496] detecting cgroup driver to use...
	I1217 12:00:49.280787 3219848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 12:00:49.280844 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 12:00:49.298858 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 12:00:49.313120 3219848 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:00:49.313230 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:00:49.329245 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:00:49.342531 3219848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:00:49.461223 3219848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:00:49.579409 3219848 docker.go:234] disabling docker service ...
	I1217 12:00:49.579510 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:00:49.594800 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:00:49.608313 3219848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:00:49.737460 3219848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:00:49.883222 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:00:49.897339 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:00:49.911914 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 12:00:49.921268 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 12:00:49.930257 3219848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 12:00:49.930398 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 12:00:49.939639 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.948689 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 12:00:49.958342 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.967395 3219848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:00:49.975730 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 12:00:49.984582 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 12:00:49.993553 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 12:00:50.009983 3219848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:00:50.019753 3219848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:00:50.028837 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.142686 3219848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 12:00:50.264183 3219848 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 12:00:50.264308 3219848 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 12:00:50.268160 3219848 start.go:564] Will wait 60s for crictl version
	I1217 12:00:50.268261 3219848 ssh_runner.go:195] Run: which crictl
	I1217 12:00:50.271790 3219848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 12:00:50.298148 3219848 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 12:00:50.298258 3219848 ssh_runner.go:195] Run: containerd --version
	I1217 12:00:50.318643 3219848 ssh_runner.go:195] Run: containerd --version
	I1217 12:00:50.346609 3219848 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 12:00:50.349545 3219848 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 12:00:50.366603 3219848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 12:00:50.370482 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
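
The bash one-liner above updates /etc/hosts idempotently: it filters out any stale host.minikube.internal line, appends the current mapping, and copies the result back over the original file. The same filter-and-append idea expressed in Go, with simplified error handling (illustration only, not minikube's code):

package main

import (
	"os"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends "<ip>\t<name>",
// mirroring the grep -v / echo / cp pipeline in the logged command above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop the stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal")
}
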
	I1217 12:00:50.383622 3219848 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 12:00:50.386526 3219848 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:00:50.386672 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:50.386774 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.415106 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.415132 3219848 containerd.go:534] Images already preloaded, skipping extraction
	I1217 12:00:50.415224 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.444492 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.444517 3219848 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:00:50.444526 3219848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 12:00:50.444639 3219848 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
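
(A note on the kubelet unit above: in a systemd drop-in, the bare ExecStart= line clears any inherited ExecStart before the following line redefines it, which is why the directive appears twice in the logged unit.)
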
	I1217 12:00:50.444718 3219848 ssh_runner.go:195] Run: sudo crictl info
	I1217 12:00:50.471453 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:50.471478 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:50.471497 3219848 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 12:00:50.471553 3219848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:00:50.471711 3219848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 12:00:50.471828 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 12:00:50.480867 3219848 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:00:50.480998 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:00:50.488686 3219848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 12:00:50.504356 3219848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 12:00:50.520176 3219848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
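The `scp memory --> path (N bytes)` lines mean the payload is generated in memory and streamed to the remote path over the SSH session rather than copied from a local file. A rough sketch of that idea, assuming a plain `ssh` client and `sudo tee` as the remote sink (minikube's ssh_runner uses its own transport, so treat this as an analogy, not its implementation):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// scpMemory streams buf to dst on host, mirroring the
// "scp memory --> dst (N bytes)" log lines above.
// host and dst here are illustrative values.
func scpMemory(host, dst string, buf []byte) error {
	// `sudo tee` writes stdin to dst; its stdout echo is discarded.
	cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %q >/dev/null", dst))
	cmd.Stdin = bytes.NewReader(buf)
	return cmd.Run()
}

func main() {
	unit := []byte("[Unit]\nWants=containerd.service\n")
	if err := scpMemory("docker@127.0.0.1", "/lib/systemd/system/kubelet.service", unit); err != nil {
		fmt.Println("transfer failed:", err)
	}
}
```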
	I1217 12:00:50.535930 3219848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 12:00:50.540134 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
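The bash pipeline above makes the /etc/hosts entry idempotent: the grep first checks for the exact current line, and only on a miss does the rewrite strip any stale `control-plane.minikube.internal` entry and append the fresh mapping before copying the temp file back with sudo. The same logic in-process, as a hedged Go sketch (path and names taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry reproduces the bash pipeline above: drop any line
// ending in "\t<host>" (a stale entry for a previous IP), then append
// the current "<ip>\t<host>" mapping.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry pointing at an old IP
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```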
	I1217 12:00:50.550629 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.669384 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:50.685420 3219848 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 12:00:50.685479 3219848 certs.go:195] generating shared ca certs ...
	I1217 12:00:50.685497 3219848 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:50.685634 3219848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 12:00:50.685683 3219848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 12:00:50.685690 3219848 certs.go:257] generating profile certs ...
	I1217 12:00:50.685787 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 12:00:50.685851 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 12:00:50.685893 3219848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 12:00:50.686084 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 12:00:50.686149 3219848 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 12:00:50.686177 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:00:50.686225 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:00:50.686286 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:00:50.686340 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 12:00:50.686422 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:50.687047 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:00:50.710384 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:00:50.730920 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:00:50.751265 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:00:50.772018 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 12:00:50.790833 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 12:00:50.810114 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:00:50.828402 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:00:50.846753 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:00:50.865705 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 12:00:50.886567 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 12:00:50.904533 3219848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:00:50.917457 3219848 ssh_runner.go:195] Run: openssl version
	I1217 12:00:50.923993 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.931839 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:00:50.939507 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943237 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943304 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.984637 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:00:50.992168 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 12:00:50.999795 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 12:00:51.020372 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024379 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024566 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.066006 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:00:51.074211 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.082049 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 12:00:51.090651 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.094888 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.095004 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.137313 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
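Each CA copied under /usr/share/ca-certificates is then symlinked under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above) so the system trust store can resolve it. A sketch of that hash-and-link step; it shells out to openssl because the subject hash is an OpenSSL-defined digest with no Go stdlib equivalent, and unlike the log's `ln -fs` it merely tolerates an existing link:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> pem, matching
// the "openssl x509 -hash" + "ln -fs" + "test -L" sequence above.
func linkBySubjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("trusted via", link) // e.g. /etc/ssl/certs/b5213941.0
}
```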
	I1217 12:00:51.145186 3219848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:00:51.149385 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 12:00:51.191456 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 12:00:51.232840 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 12:00:51.275219 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 12:00:51.317313 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 12:00:51.358746 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
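Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within 24 hours, which is what would trigger regeneration before the cluster restart. The equivalent check in pure Go stdlib, with a path taken from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend <seconds>`: it reports
// whether the first certificate in pemPath expires within d.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// 86400s == 24h, matching the checkend calls above.
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("needs regeneration:", soon)
}
```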
	I1217 12:00:51.399851 3219848 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:51.399946 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 12:00:51.400058 3219848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:00:51.427405 3219848 cri.go:89] found id: ""
	I1217 12:00:51.427480 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:00:51.435564 3219848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 12:00:51.435593 3219848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 12:00:51.435648 3219848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 12:00:51.443379 3219848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 12:00:51.443986 3219848 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-669680" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.444236 3219848 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-669680" cluster setting kubeconfig missing "newest-cni-669680" context setting]
	I1217 12:00:51.444696 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.446096 3219848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 12:00:51.454141 3219848 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 12:00:51.454214 3219848 kubeadm.go:602] duration metric: took 18.613293ms to restartPrimaryControlPlane
	I1217 12:00:51.454230 3219848 kubeadm.go:403] duration metric: took 54.392206ms to StartCluster
	I1217 12:00:51.454245 3219848 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.454304 3219848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.455245 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.455481 3219848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 12:00:51.455797 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:51.455846 3219848 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:00:51.455911 3219848 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-669680"
	I1217 12:00:51.455924 3219848 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-669680"
	I1217 12:00:51.455953 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.456410 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456591 3219848 addons.go:70] Setting dashboard=true in profile "newest-cni-669680"
	I1217 12:00:51.457002 3219848 addons.go:239] Setting addon dashboard=true in "newest-cni-669680"
	W1217 12:00:51.457012 3219848 addons.go:248] addon dashboard should already be in state true
	I1217 12:00:51.457034 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.457458 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456605 3219848 addons.go:70] Setting default-storageclass=true in profile "newest-cni-669680"
	I1217 12:00:51.458033 3219848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-669680"
	I1217 12:00:51.458306 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.460659 3219848 out.go:179] * Verifying Kubernetes components...
	I1217 12:00:51.463611 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:51.495379 3219848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:00:51.502753 3219848 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.502777 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 12:00:51.502845 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.511997 3219848 addons.go:239] Setting addon default-storageclass=true in "newest-cni-669680"
	I1217 12:00:51.512038 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.512543 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.527586 3219848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 12:00:51.536600 3219848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 12:00:49.260592 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:51.760613 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:51.539513 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 12:00:51.539539 3219848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 12:00:51.539612 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.555471 3219848 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.555502 3219848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:00:51.555570 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.569622 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.592016 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.601832 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
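All three SSH clients dial 127.0.0.1:36053, the host port Docker published for the container's 22/tcp; it is resolved with the `docker container inspect -f` template shown a few lines earlier. A self-contained sketch of the same lookup:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is bound to 22/tcp inside
// the named container, using the same inspect template as the log.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("newest-cni-669680")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh to 127.0.0.1:" + port) // e.g. 36053
}
```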
	I1217 12:00:51.689678 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:51.731294 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.749491 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.814469 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 12:00:51.814496 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 12:00:51.839602 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 12:00:51.839672 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 12:00:51.852764 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 12:00:51.852827 3219848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 12:00:51.865089 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 12:00:51.865152 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 12:00:51.878190 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 12:00:51.878259 3219848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 12:00:51.890831 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 12:00:51.890854 3219848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 12:00:51.903270 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 12:00:51.903294 3219848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 12:00:51.916127 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 12:00:51.916153 3219848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 12:00:51.929059 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 12:00:51.929123 3219848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 12:00:51.942273 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.502896 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.502968 3219848 retry.go:31] will retry after 269.884821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.503026 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503067 3219848 retry.go:31] will retry after 319.702383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503040 3219848 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:00:52.503258 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:52.503300 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503321 3219848 retry.go:31] will retry after 196.810414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.700893 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.770562 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.770599 3219848 retry.go:31] will retry after 481.518663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.773838 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:52.823221 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:52.855276 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.855328 3219848 retry.go:31] will retry after 391.667259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.894877 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.894917 3219848 retry.go:31] will retry after 200.928151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.004579 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:53.096394 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:53.155868 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.155897 3219848 retry.go:31] will retry after 564.238822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.248228 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:53.253066 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.368787 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.368822 3219848 retry.go:31] will retry after 377.070742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.369052 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.369071 3219848 retry.go:31] will retry after 485.691157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.504052 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:53.720468 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:53.746162 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:53.794993 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.795027 3219848 retry.go:31] will retry after 872.052872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.811480 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.811533 3219848 retry.go:31] will retry after 558.92589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.855758 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.922708 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.922745 3219848 retry.go:31] will retry after 803.451465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.003704 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:54.260476 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:56.760549 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:54.370776 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:54.437621 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.437652 3219848 retry.go:31] will retry after 1.190014231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.503835 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:54.667963 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:54.726498 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:54.728210 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.728287 3219848 retry.go:31] will retry after 1.413986656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:54.813279 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.813372 3219848 retry.go:31] will retry after 1.840693776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.005986 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:55.504112 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
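
The interleaved "pgrep -xnf kube-apiserver.*minikube.*" commands are a roughly 500ms liveness poll: pgrep exits 0 when a process whose full command line matches the pattern exists and 1 when none does. A Go equivalent of one poll iteration (the helper name apiserverRunning is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverRunning mirrors the pgrep probe in the log: -f matches the
    // full command line, -x requires an exact pattern match, -n picks the
    // newest matching process.
    func apiserverRunning() (bool, error) {
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        if err == nil {
            return true, nil
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return false, nil // exit status 1: no matching process
        }
        return false, err // pgrep itself failed to run
    }

    func main() {
        up, err := apiserverRunning()
        fmt.Println("kube-apiserver up:", up, "err:", err)
    }
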
	I1217 12:00:55.628242 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:55.689054 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.689136 3219848 retry.go:31] will retry after 1.799425819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.003624 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:56.142943 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:56.205592 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.205625 3219848 retry.go:31] will retry after 2.655712888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.503981 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:56.654730 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:56.717604 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.717641 3219848 retry.go:31] will retry after 1.909418395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.004223 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:57.489437 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:57.503984 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:57.562808 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.562840 3219848 retry.go:31] will retry after 3.72719526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.014740 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.503409 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.627253 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:58.690443 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.690481 3219848 retry.go:31] will retry after 3.549926007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.861704 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:58.923654 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.923683 3219848 retry.go:31] will retry after 2.058003245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:59.003967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:59.260028 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:01.761273 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:59.504167 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.018808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.504031 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.982724 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:01.004335 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:01.111365 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.111399 3219848 retry.go:31] will retry after 3.900095446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.291002 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:01.368946 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.368996 3219848 retry.go:31] will retry after 3.675584678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.503381 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.004403 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.241403 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:02.307939 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.307978 3219848 retry.go:31] will retry after 5.738469139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.504084 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.003562 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:04.005140 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:04.259626 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:06.260640 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:08.759809 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:04.503830 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.003702 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.012660 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:05.045335 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:05.083423 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.083461 3219848 retry.go:31] will retry after 9.235586003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:01:05.118369 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.118401 3219848 retry.go:31] will retry after 3.828272571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
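	The retry.go:31 lines show minikube's addon applier rerunning each failed kubectl apply after an uneven delay (5.7s, 9.2s, 3.8s, and so on). A minimal sketch of a jittered-backoff helper that produces delays of that shape; the policy, names, and parameters here are illustrative assumptions, not minikube's actual retry.go:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter calls fn until it succeeds or attempts run out, sleeping a
// jittered, roughly exponential delay between tries. The random factor
// in [0.5, 1.5) yields uneven intervals like those logged above.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for n := 0; n < attempts; n++ {
		if err = fn(); err == nil {
			return nil
		}
		d := time.Duration(float64(base) * float64(int64(1)<<n) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retryAfter(3, time.Second, func() error {
		return errors.New("apiserver not reachable") // stand-in for the failing apply
	})
	fmt.Println("gave up:", err)
}
```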
	I1217 12:01:05.503857 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.003637 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.504078 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.003401 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.503344 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.004170 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.047658 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:08.113675 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.113710 3219848 retry.go:31] will retry after 7.390134832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.504355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.946950 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:01:09.003509 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:09.011595 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:09.011629 3219848 retry.go:31] will retry after 14.170665244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:01:11.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:13.760361 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
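	The node_ready.go:55 warnings come from a second, concurrently running process (PID 3212985), which is polling GET /api/v1/nodes/no-preload-118262 on 192.168.85.2:8443 and getting connection refused, so that profile's apiserver is down as well. An illustrative, unauthenticated stand-in for that Ready-condition check (the real check runs with cluster credentials, which this bare probe omits, so against a live cluster it would be rejected rather than answered):

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeReady fetches a node object and reports whether its Ready
// condition is True. A connection error here is the same failure mode
// as the log's "dial tcp ... connection refused".
func nodeReady(base, name string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local probe only
		},
	}
	resp, err := client.Get(base + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var node struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&node); err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, fmt.Errorf("no Ready condition on node %q", name)
}

func main() {
	fmt.Println(nodeReady("https://192.168.85.2:8443", "no-preload-118262"))
}
```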
	I1217 12:01:09.503956 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.018957 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.503456 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.004169 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.503808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.003522 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.503603 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.003862 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:14.004363 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
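	Interleaved with the apply retries, minikube polls for the apiserver process roughly twice per second via sudo pgrep -xnf kube-apiserver.*minikube.*, and none of these polls ever succeeds. A sketch of that wait loop, with the ~500ms interval taken from the log timestamps and the overall timeout assumed:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a process whose full command line
// matches the pattern exists. Flags as in the log: -f matches the full
// command line, -x requires the whole line to match, -n keeps only the
// newest match.
func waitForAPIServer(ctx context.Context) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil {
			return nil // a matching kube-apiserver process exists
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}
```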
	I1217 12:01:14.319308 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:16.260406 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:18.759622 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:14.385208 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:14.385243 3219848 retry.go:31] will retry after 5.459360953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:14.503378 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.006355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504086 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504108 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:15.572879 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:15.572915 3219848 retry.go:31] will retry after 11.777794795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:16.005530 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:16.503503 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.003649 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.503430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.005004 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.504088 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.003423 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:20.760668 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:23.259693 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:19.503667 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.845708 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:19.909350 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:19.909381 3219848 retry.go:31] will retry after 9.722081791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:20.003736 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:20.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.004457 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.504148 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.003426 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.504235 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.004166 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.183313 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:23.244255 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.244289 3219848 retry.go:31] will retry after 19.619062537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.503427 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:24.006966 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:25.259753 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:27.759647 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:24.503758 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.004125 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.503463 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.004155 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.504576 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.003556 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.351598 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:27.419162 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.419195 3219848 retry.go:31] will retry after 15.164194741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.503619 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.003385 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.503474 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.004314 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:29.760524 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:32.259673 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:29.503968 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.632290 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:29.699987 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:29.700018 3219848 retry.go:31] will retry after 12.658501476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:30.003430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:30.503407 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.003818 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.504094 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.003845 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.503410 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.005413 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.503962 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:34.003405 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:34.259722 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:36.759694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:34.503770 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.004969 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.504211 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.003492 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.503881 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.008063 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.504267 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.004154 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.504195 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:39.005022 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:39.260642 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:41.759666 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:39.504074 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.009459 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.504054 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.004134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.504134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.003867 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.359033 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:42.424319 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.424350 3219848 retry.go:31] will retry after 39.499798177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.503565 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.584549 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:42.654579 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.654612 3219848 retry.go:31] will retry after 22.182784721s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.864124 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:42.925874 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.925916 3219848 retry.go:31] will retry after 18.241160237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
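	The "failed to download openapi ... connection refused" errors above do not indicate bad manifests: kubectl cannot reach the apiserver on localhost:8443 at all, so the suggested --validate=false would only skip schema validation while the apply itself kept failing against the dead endpoint. minikube instead retries the apply with a growing, randomized delay (the retry.go:31 lines above). The following is a minimal illustrative sketch of that retry-with-jitter pattern in Go; the function name applyWithRetry, the attempt count, and the starting delay are assumptions for the sketch, not minikube's actual retry.go:

	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "os/exec"
	        "time"
	    )

	    // applyWithRetry sketches the pattern visible in the log: run
	    // `kubectl apply`, and on failure sleep for a randomized, growing
	    // interval before trying again. Illustrative only.
	    func applyWithRetry(manifest string, attempts int) error {
	        backoff := 10 * time.Second // assumed starting delay
	        var lastErr error
	        for i := 0; i < attempts; i++ {
	            cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
	            if out, err := cmd.CombinedOutput(); err != nil {
	                lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
	                // Jitter the delay so retries do not land in lockstep.
	                wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
	                fmt.Printf("will retry after %s: %v\n", wait, lastErr)
	                time.Sleep(wait)
	                backoff *= 2
	                continue
	            }
	            return nil
	        }
	        return lastErr
	    }

	    func main() {
	        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
	            fmt.Println("giving up:", err)
	        }
	    }

	With the apiserver down, every attempt fails identically, which is exactly the repeating pattern in the rest of this log.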
	I1217 12:01:43.004102 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:43.504356 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:44.004028 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:44.259623 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:46.260805 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:48.760674 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:44.503929 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.003640 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.503747 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.003443 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.003372 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.503601 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.003536 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.503987 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:49.003434 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:51.260164 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:53.759783 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:49.504162 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.003493 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.503875 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.004324 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.503888 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:51.503983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:51.536666 3219848 cri.go:89] found id: ""
	I1217 12:01:51.536689 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.536698 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:51.536704 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:51.536768 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:51.562047 3219848 cri.go:89] found id: ""
	I1217 12:01:51.562070 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.562078 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:51.562084 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:51.562149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:51.586286 3219848 cri.go:89] found id: ""
	I1217 12:01:51.586309 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.586317 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:51.586323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:51.586381 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:51.611834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.611858 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.611867 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:51.611873 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:51.611942 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:51.637620 3219848 cri.go:89] found id: ""
	I1217 12:01:51.637643 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.637651 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:51.637658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:51.637715 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:51.663176 3219848 cri.go:89] found id: ""
	I1217 12:01:51.663198 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.663206 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:51.663212 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:51.663273 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:51.688038 3219848 cri.go:89] found id: ""
	I1217 12:01:51.688064 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.688083 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:51.688090 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:51.688159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:51.715834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.715860 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.715870 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:51.715879 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:51.715890 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:51.772533 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:51.772567 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:51.788370 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:51.788400 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:51.855552 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:51.847275    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.848081    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849574    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849998    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.851493    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:51.847275    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.848081    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849574    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849998    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.851493    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:51.855615 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:51.855635 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:51.880660 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:51.880693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
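	Each diagnostics pass above follows the same shape: probe crictl for every control-plane container name (finding none), then fall back to gathering journalctl output for kubelet and containerd, dmesg, and a kubectl describe nodes that fails against the unreachable apiserver. Below is a hedged Go sketch of the crictl probing loop, reusing the exact crictl flags shown in the log; the program structure itself is illustrative, not minikube's cri.go:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Same component names probed in the log, in the same order.
	        names := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "kubernetes-dashboard",
	        }
	        for _, name := range names {
	            // crictl ps -a --quiet --name=<name>, as in the log lines above.
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            ids := strings.Fields(string(out))
	            if err != nil || len(ids) == 0 {
	                fmt.Printf("No container was found matching %q\n", name)
	                continue
	            }
	            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	        }
	    }

	On this node every probe returns an empty ID list, which is why each pass ends in the journalctl/dmesg fallback rather than component logs.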
	W1217 12:01:56.259727 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:58.760523 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:54.414807 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:54.425488 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:54.425558 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:54.453841 3219848 cri.go:89] found id: ""
	I1217 12:01:54.453870 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.453880 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:54.453887 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:54.453946 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:54.478957 3219848 cri.go:89] found id: ""
	I1217 12:01:54.478982 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.478991 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:54.478998 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:54.479060 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:54.504488 3219848 cri.go:89] found id: ""
	I1217 12:01:54.504516 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.504535 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:54.504543 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:54.504606 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:54.529418 3219848 cri.go:89] found id: ""
	I1217 12:01:54.529445 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.529454 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:54.529460 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:54.529519 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:54.557757 3219848 cri.go:89] found id: ""
	I1217 12:01:54.557781 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.557790 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:54.557797 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:54.557854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:54.586961 3219848 cri.go:89] found id: ""
	I1217 12:01:54.586996 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.587004 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:54.587011 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:54.587077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:54.612590 3219848 cri.go:89] found id: ""
	I1217 12:01:54.612617 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.612626 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:54.612633 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:54.612694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:54.638207 3219848 cri.go:89] found id: ""
	I1217 12:01:54.638234 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.638243 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:54.638253 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:54.638264 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:54.695917 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:54.695955 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:54.712729 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:54.712759 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:54.782298 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:54.774102    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.774684    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776463    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776850    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.778510    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:54.774102    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.774684    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776463    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776850    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.778510    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:54.782321 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:54.782333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:54.807165 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:54.807196 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:01:57.336099 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:57.346978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:57.347048 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:57.371132 3219848 cri.go:89] found id: ""
	I1217 12:01:57.371155 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.371163 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:57.371169 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:57.371232 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:57.396905 3219848 cri.go:89] found id: ""
	I1217 12:01:57.396933 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.396942 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:57.396948 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:57.397011 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:57.425337 3219848 cri.go:89] found id: ""
	I1217 12:01:57.425366 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.425374 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:57.425381 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:57.425440 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:57.449681 3219848 cri.go:89] found id: ""
	I1217 12:01:57.449709 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.449718 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:57.449725 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:57.449784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:57.475302 3219848 cri.go:89] found id: ""
	I1217 12:01:57.475328 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.475337 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:57.475343 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:57.475412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:57.500270 3219848 cri.go:89] found id: ""
	I1217 12:01:57.500344 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.500369 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:57.500389 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:57.500509 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:57.527492 3219848 cri.go:89] found id: ""
	I1217 12:01:57.527519 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.527532 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:57.527538 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:57.527650 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:57.553482 3219848 cri.go:89] found id: ""
	I1217 12:01:57.553549 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.553576 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:57.553602 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:57.553627 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:57.609257 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:57.609292 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:57.625325 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:57.625352 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:57.691022 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:57.682604    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.683106    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.684793    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.685506    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.687043    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:57.682604    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.683106    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.684793    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.685506    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.687043    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:57.691048 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:57.691061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:57.716301 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:57.716333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 12:02:01.260216 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:02:02.764189 3212985 node_ready.go:38] duration metric: took 6m0.005070756s for node "no-preload-118262" to be "Ready" ...
	I1217 12:02:02.767452 3212985 out.go:203] 
	W1217 12:02:02.770608 3212985 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 12:02:02.770638 3212985 out.go:285] * 
	W1217 12:02:02.772986 3212985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 12:02:02.776078 3212985 out.go:203] 
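	This is the terminal state of the no-preload SecondStart: node_ready.go polled the node's Ready condition every few seconds for the full 6-minute budget, got connection refused on every attempt, and start exited with GUEST_START. A minimal sketch of that deadline-bounded poll follows, assuming plain HTTPS GETs with TLS verification disabled and a fixed 2-second cadence for brevity; the real code uses an authenticated Kubernetes client and inspects the node's Ready condition, so this shows only the control flow:

	    package main

	    import (
	        "context"
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitNodeReady keeps asking the apiserver for the node until it
	    // answers or the time budget runs out. Illustrative sketch only.
	    func waitNodeReady(url string, timeout time.Duration) error {
	        ctx, cancel := context.WithTimeout(context.Background(), timeout)
	        defer cancel()
	        client := &http.Client{Transport: &http.Transport{
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	        }}
	        for {
	            req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	            resp, err := client.Do(req)
	            if err == nil {
	                resp.Body.Close()
	                return nil // apiserver answered; real code now checks Ready
	            }
	            fmt.Printf("error getting node (will retry): %v\n", err)
	            select {
	            case <-ctx.Done():
	                return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
	            case <-time.After(2 * time.Second): // assumed cadence
	            }
	        }
	    }

	    func main() {
	        url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262"
	        if err := waitNodeReady(url, 6*time.Minute); err != nil {
	            fmt.Println("X Exiting due to GUEST_START:", err)
	        }
	    }

	Run against the URL from the log with nothing listening on 8443, this reproduces the same "will retry" warnings followed by "context deadline exceeded".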
	I1217 12:02:00.244802 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:00.315692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:00.315780 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:00.376798 3219848 cri.go:89] found id: ""
	I1217 12:02:00.376842 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.376852 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:00.376859 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:00.376949 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:00.414474 3219848 cri.go:89] found id: ""
	I1217 12:02:00.414502 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.414513 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:00.414520 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:00.414590 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:00.447266 3219848 cri.go:89] found id: ""
	I1217 12:02:00.447306 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.447316 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:00.447323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:00.447415 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:00.477352 3219848 cri.go:89] found id: ""
	I1217 12:02:00.477378 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.477387 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:00.477394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:00.477457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:00.506577 3219848 cri.go:89] found id: ""
	I1217 12:02:00.506605 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.506614 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:00.506621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:00.506720 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:00.533943 3219848 cri.go:89] found id: ""
	I1217 12:02:00.533966 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.533975 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:00.533982 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:00.534051 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:00.560396 3219848 cri.go:89] found id: ""
	I1217 12:02:00.560462 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.560472 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:00.560479 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:00.560573 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:00.587859 3219848 cri.go:89] found id: ""
	I1217 12:02:00.587931 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.587955 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:00.587983 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:00.588035 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:00.620134 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:00.620217 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:00.677187 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:00.677223 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:00.694138 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:00.694242 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:00.762938 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:00.753622    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.754338    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.755466    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757073    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757631    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:00.753622    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.754338    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.755466    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757073    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757631    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:00.763025 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:00.763058 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:01.167394 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:02:01.232118 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:01.232151 3219848 retry.go:31] will retry after 39.797194994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:03.292559 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:03.304708 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:03.304784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:03.332491 3219848 cri.go:89] found id: ""
	I1217 12:02:03.332511 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.332519 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:03.332526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:03.332630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:03.361080 3219848 cri.go:89] found id: ""
	I1217 12:02:03.361107 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.361115 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:03.361121 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:03.361179 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:03.397354 3219848 cri.go:89] found id: ""
	I1217 12:02:03.397382 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.397391 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:03.397397 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:03.397473 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:03.431465 3219848 cri.go:89] found id: ""
	I1217 12:02:03.431493 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.431502 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:03.431509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:03.431569 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:03.464102 3219848 cri.go:89] found id: ""
	I1217 12:02:03.464125 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.464133 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:03.464139 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:03.464197 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:03.497848 3219848 cri.go:89] found id: ""
	I1217 12:02:03.497879 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.497888 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:03.497895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:03.497952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:03.568108 3219848 cri.go:89] found id: ""
	I1217 12:02:03.568130 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.568139 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:03.568144 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:03.568202 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:03.632108 3219848 cri.go:89] found id: ""
	I1217 12:02:03.632136 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.632151 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:03.632161 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:03.632173 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:03.724972 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:03.708641    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.709073    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.716627    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.717278    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.719035    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:03.708641    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.709073    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.716627    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.717278    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.719035    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:03.725000 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:03.725012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:03.753083 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:03.753174 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:03.790574 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:03.790596 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:03.863404 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:03.863488 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:04.837606 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:02:04.901525 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:04.901562 3219848 retry.go:31] will retry after 21.256241349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:06.385200 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:06.395642 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:06.395734 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:06.422500 3219848 cri.go:89] found id: ""
	I1217 12:02:06.422526 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.422535 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:06.422542 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:06.422603 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:06.449741 3219848 cri.go:89] found id: ""
	I1217 12:02:06.449763 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.449773 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:06.449779 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:06.449836 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:06.478823 3219848 cri.go:89] found id: ""
	I1217 12:02:06.478844 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.478852 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:06.478858 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:06.478924 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:06.507270 3219848 cri.go:89] found id: ""
	I1217 12:02:06.507298 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.507307 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:06.507313 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:06.507390 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:06.536741 3219848 cri.go:89] found id: ""
	I1217 12:02:06.536774 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.536783 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:06.536790 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:06.536859 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:06.569124 3219848 cri.go:89] found id: ""
	I1217 12:02:06.569152 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.569161 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:06.569168 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:06.569223 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:06.597119 3219848 cri.go:89] found id: ""
	I1217 12:02:06.597140 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.597148 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:06.597155 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:06.597213 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:06.623129 3219848 cri.go:89] found id: ""
	I1217 12:02:06.623152 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.623161 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:06.623171 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:06.623181 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:06.679634 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:06.679669 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:06.696235 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:06.696273 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:06.764004 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:06.755277    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.755704    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.757595    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.758654    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.760132    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:06.755277    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.755704    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.757595    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.758654    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.760132    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
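The stderr block above pins down the root cause: kubectl cannot even fetch the API group list because nothing is listening on the apiserver port ("dial tcp [::1]:8443: connect: connection refused"), consistent with the empty kube-apiserver sweep a few lines earlier. A quick probe that reproduces the same symptom; a sketch assuming it runs on the node (e.g. via "minikube ssh") and that 8443 is the port named in the kubeconfig above.

    // Sketch of the probe implied by the kubectl stderr: is anything
    // listening on the apiserver port at all?
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // expected here: "connect: connection refused", matching kubectl
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }

Connection refused, rather than a timeout or a TLS error, means the port is closed outright: the apiserver process never bound it, as opposed to being up but unhealthy.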
	I1217 12:02:06.764031 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:06.764044 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:06.789440 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:06.789478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
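One full diagnostic cycle ends here; the next one, starting just below, repeats the same pgrep check at 12:02:09, so the wait loop is polling roughly every three seconds. A sketch of the loop the timestamps imply; the ~3s interval and the deadline are read off the log (12:02:06 → :09 → :12 → :15 ...), not taken from minikube source.

    // Sketch of the implied wait loop: poll the same pgrep check the log
    // shows until a kube-apiserver process appears or a deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // same check as the `sudo pgrep -xnf kube-apiserver.*minikube.*` lines;
            // pgrep exits 0 only when a matching process exists
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return true
            }
            time.Sleep(3 * time.Second)
        }
        return false
    }

    func main() {
        if !waitForAPIServer(time.Minute) {
            fmt.Println("kube-apiserver never appeared")
        }
    }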
	I1217 12:02:09.319544 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:09.335051 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:09.335144 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:09.363250 3219848 cri.go:89] found id: ""
	I1217 12:02:09.363278 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.363288 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:09.363296 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:09.363357 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:09.387533 3219848 cri.go:89] found id: ""
	I1217 12:02:09.387598 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.387624 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:09.387646 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:09.387735 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:09.411943 3219848 cri.go:89] found id: ""
	I1217 12:02:09.411970 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.411978 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:09.411985 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:09.412042 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:09.438061 3219848 cri.go:89] found id: ""
	I1217 12:02:09.438127 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.438151 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:09.438167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:09.438250 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:09.463378 3219848 cri.go:89] found id: ""
	I1217 12:02:09.463407 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.463415 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:09.463422 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:09.463481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:09.494069 3219848 cri.go:89] found id: ""
	I1217 12:02:09.494098 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.494107 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:09.494114 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:09.494178 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:09.526694 3219848 cri.go:89] found id: ""
	I1217 12:02:09.526771 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.526795 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:09.526815 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:09.526923 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:09.553523 3219848 cri.go:89] found id: ""
	I1217 12:02:09.553585 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.553616 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:09.553641 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:09.553678 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:09.618427 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:09.618463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:09.634212 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:09.634244 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:09.696895 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:09.688293    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.688801    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690481    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690806    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.692990    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:09.688293    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.688801    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690481    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690806    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.692990    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:09.696914 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:09.696926 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:09.722288 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:09.722324 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
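Each cycle fans out to the same five log sources: kubelet and containerd via journalctl, the kernel ring buffer via dmesg, kubectl describe nodes, and a crictl/docker container listing (only the order shuffles between cycles). The commands collected into one runnable sketch; the shell strings are copied verbatim from the log lines above and assume they run on the node with sudo available.

    // The five "Gathering logs for ..." steps as plain commands.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        gathers := [][2]string{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gathers {
            out, err := exec.Command("/bin/bash", "-c", g[1]).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s", g[0], err, out)
        }
    }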
	I1217 12:02:12.249861 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:12.261558 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:12.261626 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:12.293092 3219848 cri.go:89] found id: ""
	I1217 12:02:12.293113 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.293121 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:12.293128 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:12.293188 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:12.319347 3219848 cri.go:89] found id: ""
	I1217 12:02:12.319374 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.319384 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:12.319390 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:12.319448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:12.343912 3219848 cri.go:89] found id: ""
	I1217 12:02:12.343939 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.343948 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:12.343955 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:12.344013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:12.370544 3219848 cri.go:89] found id: ""
	I1217 12:02:12.370571 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.370581 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:12.370587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:12.370645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:12.397552 3219848 cri.go:89] found id: ""
	I1217 12:02:12.397578 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.397587 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:12.397593 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:12.397652 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:12.421606 3219848 cri.go:89] found id: ""
	I1217 12:02:12.421673 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.421699 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:12.421715 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:12.421791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:12.447065 3219848 cri.go:89] found id: ""
	I1217 12:02:12.447088 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.447097 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:12.447103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:12.447169 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:12.473547 3219848 cri.go:89] found id: ""
	I1217 12:02:12.473575 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.473583 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:12.473645 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:12.473670 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:12.489529 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:12.489559 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:12.574945 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:12.562789    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567073    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567687    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569241    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569901    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:12.562789    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567073    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567687    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569241    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569901    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:12.574970 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:12.574986 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:12.601521 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:12.601562 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:12.633893 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:12.633920 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:15.190960 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:15.202334 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:15.202461 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:15.231453 3219848 cri.go:89] found id: ""
	I1217 12:02:15.231486 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.231495 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:15.231507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:15.231609 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:15.264097 3219848 cri.go:89] found id: ""
	I1217 12:02:15.264120 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.264129 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:15.264135 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:15.264196 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:15.293547 3219848 cri.go:89] found id: ""
	I1217 12:02:15.293574 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.293583 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:15.293589 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:15.293650 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:15.321905 3219848 cri.go:89] found id: ""
	I1217 12:02:15.321968 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.321991 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:15.322013 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:15.322084 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:15.349052 3219848 cri.go:89] found id: ""
	I1217 12:02:15.349085 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.349095 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:15.349102 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:15.349175 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:15.374350 3219848 cri.go:89] found id: ""
	I1217 12:02:15.374377 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.374387 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:15.374394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:15.374457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:15.412039 3219848 cri.go:89] found id: ""
	I1217 12:02:15.412066 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.412075 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:15.412082 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:15.412153 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:15.441228 3219848 cri.go:89] found id: ""
	I1217 12:02:15.441255 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.441265 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:15.441274 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:15.441309 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:15.467564 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:15.467601 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:15.501031 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:15.501100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:15.564025 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:15.564059 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:15.581879 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:15.581906 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:15.647244 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:15.638661    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.639327    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641006    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641615    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.643194    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:15.638661    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.639327    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641006    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641615    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.643194    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:18.147543 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:18.158738 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:18.158817 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:18.184828 3219848 cri.go:89] found id: ""
	I1217 12:02:18.184853 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.184862 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:18.184869 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:18.184931 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:18.211904 3219848 cri.go:89] found id: ""
	I1217 12:02:18.211935 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.211944 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:18.211950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:18.212010 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:18.237088 3219848 cri.go:89] found id: ""
	I1217 12:02:18.237154 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.237170 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:18.237177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:18.237239 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:18.278916 3219848 cri.go:89] found id: ""
	I1217 12:02:18.278943 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.278953 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:18.278960 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:18.279018 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:18.307105 3219848 cri.go:89] found id: ""
	I1217 12:02:18.307133 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.307143 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:18.307150 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:18.307210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:18.336099 3219848 cri.go:89] found id: ""
	I1217 12:02:18.336132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.336141 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:18.336148 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:18.336217 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:18.362366 3219848 cri.go:89] found id: ""
	I1217 12:02:18.362432 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.362456 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:18.362472 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:18.362547 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:18.388125 3219848 cri.go:89] found id: ""
	I1217 12:02:18.388151 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.388160 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:18.388169 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:18.388180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:18.456052 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:18.446941    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.447634    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449296    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449839    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.451474    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:18.446941    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.447634    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449296    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449839    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.451474    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:18.456114 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:18.456134 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:18.481868 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:18.481899 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:18.525523 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:18.525600 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:18.594163 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:18.594200 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:21.113595 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:21.124720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:21.124792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:21.150373 3219848 cri.go:89] found id: ""
	I1217 12:02:21.150397 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.150406 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:21.150412 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:21.150471 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:21.179044 3219848 cri.go:89] found id: ""
	I1217 12:02:21.179069 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.179078 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:21.179085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:21.179156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:21.205105 3219848 cri.go:89] found id: ""
	I1217 12:02:21.205132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.205141 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:21.205147 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:21.205207 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:21.230210 3219848 cri.go:89] found id: ""
	I1217 12:02:21.230235 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.230243 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:21.230251 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:21.230328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:21.265026 3219848 cri.go:89] found id: ""
	I1217 12:02:21.265052 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.265061 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:21.265068 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:21.265128 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:21.302976 3219848 cri.go:89] found id: ""
	I1217 12:02:21.303002 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.303017 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:21.303025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:21.303097 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:21.333258 3219848 cri.go:89] found id: ""
	I1217 12:02:21.333282 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.333292 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:21.333299 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:21.333361 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:21.359283 3219848 cri.go:89] found id: ""
	I1217 12:02:21.359308 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.359317 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:21.359327 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:21.359338 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:21.416901 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:21.416944 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:21.433045 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:21.433074 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:21.505849 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:21.494474    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.495106    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.496640    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.497253    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.500699    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:21.494474    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.495106    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.496640    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.497253    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.500699    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:21.505920 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:21.505948 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:21.534970 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:21.535156 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:21.925292 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:02:21.990437 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:21.990546 3219848 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
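The storage-provisioner failure above is secondary damage from the same dead apiserver: kubectl apply validates manifests against the server's OpenAPI schema, and the schema download hits the same connection refused on 8443. minikube's addons code logs "apply failed, will retry" and re-runs the apply; a sketch of that retry pattern follows, with an illustrative attempt count and backoff, since the real policy lives in minikube's addons.go.

    // Sketch of the "apply failed, will retry" pattern: re-run the same
    // kubectl apply with a fixed backoff. Attempt count and delay are
    // illustrative assumptions, not minikube's actual retry policy.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        var lastErr error
        for i := 1; i <= attempts; i++ {
            out, err := exec.Command("sudo",
                "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
                "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("attempt %d: %v: %s", i, err, out)
            time.Sleep(5 * time.Second)
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
            fmt.Println(err)
        }
    }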
	I1217 12:02:24.077604 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:24.089001 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:24.089072 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:24.120652 3219848 cri.go:89] found id: ""
	I1217 12:02:24.120677 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.120688 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:24.120695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:24.120755 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:24.147236 3219848 cri.go:89] found id: ""
	I1217 12:02:24.147263 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.147273 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:24.147280 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:24.147339 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:24.173122 3219848 cri.go:89] found id: ""
	I1217 12:02:24.173147 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.173157 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:24.173163 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:24.173223 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:24.207220 3219848 cri.go:89] found id: ""
	I1217 12:02:24.207243 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.207253 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:24.207259 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:24.207324 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:24.232981 3219848 cri.go:89] found id: ""
	I1217 12:02:24.233004 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.233013 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:24.233020 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:24.233087 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:24.266790 3219848 cri.go:89] found id: ""
	I1217 12:02:24.266815 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.266825 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:24.266832 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:24.266896 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:24.299029 3219848 cri.go:89] found id: ""
	I1217 12:02:24.299056 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.299065 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:24.299072 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:24.299150 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:24.332940 3219848 cri.go:89] found id: ""
	I1217 12:02:24.332966 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.332975 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:24.332984 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:24.332994 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:24.358486 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:24.358520 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:24.395087 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:24.395119 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:24.453543 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:24.453581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:24.469070 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:24.469100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:24.547838 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:24.537508    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.538320    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.540311    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542086    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542713    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:24.537508    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.538320    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.540311    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542086    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542713    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:26.158720 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:02:26.235734 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:26.235852 3219848 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
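Every manifest in the batch fails at the same step: client-side validation fetches the OpenAPI schema from the apiserver, and nothing is listening on localhost:8443. The error text names the escape hatch itself; as a sketch, the apply minikube could not complete, with validation skipped (paths, binary version, and kubeconfig are copied verbatim from the log above — skipping validation only removes the schema fetch, and the apply still fails while the apiserver is down):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml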
	I1217 12:02:27.048020 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:27.058730 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:27.058803 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:27.083792 3219848 cri.go:89] found id: ""
	I1217 12:02:27.083815 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.083824 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:27.083831 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:27.083893 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:27.110794 3219848 cri.go:89] found id: ""
	I1217 12:02:27.110820 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.110841 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:27.110865 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:27.110940 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:27.136730 3219848 cri.go:89] found id: ""
	I1217 12:02:27.136760 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.136768 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:27.136775 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:27.136833 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:27.161755 3219848 cri.go:89] found id: ""
	I1217 12:02:27.161780 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.161813 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:27.161819 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:27.161886 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:27.187885 3219848 cri.go:89] found id: ""
	I1217 12:02:27.187912 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.187921 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:27.187928 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:27.187987 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:27.214398 3219848 cri.go:89] found id: ""
	I1217 12:02:27.214424 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.214432 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:27.214440 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:27.214528 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:27.240617 3219848 cri.go:89] found id: ""
	I1217 12:02:27.240642 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.240652 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:27.240658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:27.240740 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:27.272907 3219848 cri.go:89] found id: ""
	I1217 12:02:27.272985 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.273008 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:27.273034 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:27.273061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:27.338834 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:27.338872 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:27.355488 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:27.355518 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:27.425201 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:27.415325    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.415952    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.418308    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.419305    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.420311    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:27.415325    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.415952    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.418308    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.419305    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.420311    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:27.425231 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:27.425245 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:27.451232 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:27.451264 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:29.988282 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:29.998906 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:29.998982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:30.032593 3219848 cri.go:89] found id: ""
	I1217 12:02:30.032619 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.032628 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:30.032635 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:30.032703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:30.065200 3219848 cri.go:89] found id: ""
	I1217 12:02:30.065230 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.065239 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:30.065247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:30.065319 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:30.100730 3219848 cri.go:89] found id: ""
	I1217 12:02:30.100758 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.100767 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:30.100773 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:30.100837 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:30.127247 3219848 cri.go:89] found id: ""
	I1217 12:02:30.127273 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.127293 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:30.127299 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:30.127380 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:30.156586 3219848 cri.go:89] found id: ""
	I1217 12:02:30.156611 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.156619 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:30.156627 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:30.156692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:30.182150 3219848 cri.go:89] found id: ""
	I1217 12:02:30.182174 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.182215 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:30.182222 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:30.182285 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:30.209339 3219848 cri.go:89] found id: ""
	I1217 12:02:30.209366 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.209376 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:30.209383 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:30.209443 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:30.235224 3219848 cri.go:89] found id: ""
	I1217 12:02:30.235250 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.235259 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:30.235268 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:30.235279 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:30.305932 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:30.297455    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.298291    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300025    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300319    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.301797    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:30.297455    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.298291    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300025    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300319    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.301797    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:30.305955 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:30.305968 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:30.335249 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:30.335282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:30.366831 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:30.366859 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:30.423045 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:30.423081 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:32.941855 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:32.953974 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:32.954052 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:32.986211 3219848 cri.go:89] found id: ""
	I1217 12:02:32.986233 3219848 logs.go:282] 0 containers: []
	W1217 12:02:32.986242 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:32.986249 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:32.986333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:33.015180 3219848 cri.go:89] found id: ""
	I1217 12:02:33.015209 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.015218 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:33.015227 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:33.015292 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:33.043066 3219848 cri.go:89] found id: ""
	I1217 12:02:33.043132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.043182 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:33.043216 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:33.043303 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:33.070150 3219848 cri.go:89] found id: ""
	I1217 12:02:33.070178 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.070187 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:33.070194 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:33.070254 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:33.099464 3219848 cri.go:89] found id: ""
	I1217 12:02:33.099502 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.099511 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:33.099519 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:33.099592 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:33.125134 3219848 cri.go:89] found id: ""
	I1217 12:02:33.125161 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.125170 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:33.125177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:33.125238 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:33.152585 3219848 cri.go:89] found id: ""
	I1217 12:02:33.152608 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.152617 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:33.152638 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:33.152703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:33.177715 3219848 cri.go:89] found id: ""
	I1217 12:02:33.177740 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.177749 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:33.177759 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:33.177770 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:33.234986 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:33.235024 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:33.255146 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:33.255186 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:33.339613 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:33.330741    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.331497    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333272    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333726    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.334950    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:33.330741    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.331497    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333272    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333726    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.334950    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:33.339647 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:33.339660 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:33.366064 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:33.366101 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:35.894549 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:35.904950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:35.905022 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:35.933462 3219848 cri.go:89] found id: ""
	I1217 12:02:35.933485 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.933493 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:35.933499 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:35.933558 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:35.958161 3219848 cri.go:89] found id: ""
	I1217 12:02:35.958228 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.958254 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:35.958275 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:35.958364 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:35.983016 3219848 cri.go:89] found id: ""
	I1217 12:02:35.983041 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.983051 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:35.983057 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:35.983126 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:36.015482 3219848 cri.go:89] found id: ""
	I1217 12:02:36.015527 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.015536 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:36.015543 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:36.015620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:36.046357 3219848 cri.go:89] found id: ""
	I1217 12:02:36.046393 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.046406 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:36.046416 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:36.046577 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:36.072553 3219848 cri.go:89] found id: ""
	I1217 12:02:36.072587 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.072596 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:36.072602 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:36.072662 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:36.099878 3219848 cri.go:89] found id: ""
	I1217 12:02:36.099911 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.099927 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:36.099934 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:36.100024 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:36.129180 3219848 cri.go:89] found id: ""
	I1217 12:02:36.129203 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.129212 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:36.129221 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:36.129234 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:36.186216 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:36.186254 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:36.203136 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:36.203166 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:36.273412 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:36.264653    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.265536    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267226    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267782    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.269421    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:36.264653    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.265536    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267226    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267782    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.269421    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:36.273433 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:36.273446 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:36.300346 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:36.300378 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:38.840293 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:38.851323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:38.851395 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:38.878324 3219848 cri.go:89] found id: ""
	I1217 12:02:38.878347 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.878356 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:38.878362 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:38.878418 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:38.904803 3219848 cri.go:89] found id: ""
	I1217 12:02:38.904824 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.904833 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:38.904839 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:38.904897 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:38.929044 3219848 cri.go:89] found id: ""
	I1217 12:02:38.929067 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.929075 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:38.929081 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:38.929148 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:38.953075 3219848 cri.go:89] found id: ""
	I1217 12:02:38.953101 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.953109 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:38.953119 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:38.953179 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:38.982538 3219848 cri.go:89] found id: ""
	I1217 12:02:38.982560 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.982569 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:38.982575 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:38.982634 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:39.009774 3219848 cri.go:89] found id: ""
	I1217 12:02:39.009797 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.009806 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:39.009813 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:39.009877 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:39.035772 3219848 cri.go:89] found id: ""
	I1217 12:02:39.035848 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.035872 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:39.035894 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:39.035966 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:39.070261 3219848 cri.go:89] found id: ""
	I1217 12:02:39.070282 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.070291 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:39.070299 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:39.070311 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:39.086150 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:39.086228 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:39.158855 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:39.150093    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.151044    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.152764    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.153406    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.155059    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:39.150093    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.151044    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.152764    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.153406    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.155059    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:39.158917 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:39.158948 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:39.184120 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:39.184154 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:39.228401 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:39.228446 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:41.030449 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:02:41.099078 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:41.099186 3219848 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 12:02:41.102220 3219848 out.go:179] * Enabled addons: 
	I1217 12:02:41.105179 3219848 addons.go:530] duration metric: took 1m49.649331261s for enable addons: enabled=[]
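The cycles above repeat the same triage every few seconds: look for a kube-apiserver process, list control-plane containers by name, then gather kubelet, dmesg, containerd, and container-status logs; every pass comes back empty, and every kubectl call is refused because nothing listens on 8443. The same loop can be run by hand against the node, shown here as a sketch (the profile name functional-232588 is taken from this report; the commands mirror the ones logged above):

    # open a shell on the minikube node for this profile
    minikube ssh -p functional-232588

    # same process check minikube performs
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # list control-plane containers, running or exited
    sudo crictl ps -a --name kube-apiserver
    sudo crictl ps -a --name etcd

    # if both lists stay empty, the kubelet never started the static pods;
    # its journal usually records why
    sudo journalctl -u kubelet -n 400 --no-pager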
	I1217 12:02:41.789011 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:41.800666 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:41.800741 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:41.831181 3219848 cri.go:89] found id: ""
	I1217 12:02:41.831214 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.831222 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:41.831229 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:41.831292 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:41.855868 3219848 cri.go:89] found id: ""
	I1217 12:02:41.855893 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.855901 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:41.855909 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:41.855970 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:41.880077 3219848 cri.go:89] found id: ""
	I1217 12:02:41.880102 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.880110 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:41.880117 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:41.880174 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:41.904526 3219848 cri.go:89] found id: ""
	I1217 12:02:41.904553 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.904562 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:41.904568 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:41.904630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:41.930234 3219848 cri.go:89] found id: ""
	I1217 12:02:41.930257 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.930266 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:41.930272 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:41.930329 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:41.958809 3219848 cri.go:89] found id: ""
	I1217 12:02:41.958835 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.958844 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:41.958851 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:41.958909 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:41.983616 3219848 cri.go:89] found id: ""
	I1217 12:02:41.983642 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.983652 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:41.983658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:41.983723 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:42.011680 3219848 cri.go:89] found id: ""
	I1217 12:02:42.011705 3219848 logs.go:282] 0 containers: []
	W1217 12:02:42.011714 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:42.011725 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:42.011736 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:42.073172 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:42.073215 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:42.092098 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:42.092139 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:42.170615 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:42.158978    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.160329    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.161071    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.163397    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.164052    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:42.158978    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.160329    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.161071    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.163397    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.164052    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:42.170644 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:42.170669 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:42.200096 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:42.200137 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:44.738108 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:44.751949 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:44.752049 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:44.785830 3219848 cri.go:89] found id: ""
	I1217 12:02:44.785869 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.785902 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:44.785911 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:44.785988 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:44.815102 3219848 cri.go:89] found id: ""
	I1217 12:02:44.815138 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.815148 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:44.815154 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:44.815256 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:44.843623 3219848 cri.go:89] found id: ""
	I1217 12:02:44.843658 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.843667 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:44.843674 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:44.843768 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:44.868589 3219848 cri.go:89] found id: ""
	I1217 12:02:44.868612 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.868620 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:44.868626 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:44.868710 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:44.893731 3219848 cri.go:89] found id: ""
	I1217 12:02:44.893757 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.893767 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:44.893774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:44.893877 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:44.920703 3219848 cri.go:89] found id: ""
	I1217 12:02:44.920732 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.920741 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:44.920748 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:44.920807 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:44.945270 3219848 cri.go:89] found id: ""
	I1217 12:02:44.945307 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.945317 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:44.945323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:44.945390 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:44.974571 3219848 cri.go:89] found id: ""
	I1217 12:02:44.974669 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.974693 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:44.974723 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:44.974767 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:45.011160 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:45.011262 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:45.135210 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:45.135297 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:45.172030 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:45.172125 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:45.299181 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:45.286225    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.288700    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.289610    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.291554    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.292270    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:45.286225    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.288700    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.289610    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.291554    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.292270    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:45.299256 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:45.299270 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:47.834408 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:47.845640 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:47.845713 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:47.875767 3219848 cri.go:89] found id: ""
	I1217 12:02:47.875793 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.875803 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:47.875809 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:47.875894 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:47.900760 3219848 cri.go:89] found id: ""
	I1217 12:02:47.900798 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.900808 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:47.900815 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:47.900916 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:47.925606 3219848 cri.go:89] found id: ""
	I1217 12:02:47.925640 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.925650 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:47.925656 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:47.925730 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:47.953896 3219848 cri.go:89] found id: ""
	I1217 12:02:47.953919 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.953928 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:47.953935 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:47.954003 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:47.979667 3219848 cri.go:89] found id: ""
	I1217 12:02:47.979736 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.979759 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:47.979780 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:47.979871 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:48.009398 3219848 cri.go:89] found id: ""
	I1217 12:02:48.009477 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.009502 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:48.009528 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:48.009630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:48.039277 3219848 cri.go:89] found id: ""
	I1217 12:02:48.039349 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.039373 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:48.039400 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:48.039498 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:48.065115 3219848 cri.go:89] found id: ""
	I1217 12:02:48.065140 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.065151 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:48.065162 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:48.065175 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:48.081650 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:48.081680 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:48.149022 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	** stderr ** 
	E1217 12:02:48.140864    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.141345    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.142918    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.143402    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.144920    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
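Each polling cycle above sweeps the same set of control-plane names with "sudo crictl ps -a --quiet --name=<name>" and logs "0 containers" for every one. A minimal local sketch of that sweep (hypothetical: minikube actually runs it over its SSH runner; the component list is taken from this log):

// Sketch of the per-component CRI sweep visible in this log. crictl with
// --quiet prints one container ID per line and exits 0 even when nothing
// matches, so an empty result means "no container was found matching".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		} else {
			fmt.Printf("found %d container(s) for %q: %v\n", len(ids), name, ids)
		}
	}
}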
	I1217 12:02:48.149046 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:48.149060 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:48.174962 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:48.174999 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:48.204617 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:48.204645 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:50.772582 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:50.784158 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:50.784228 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:50.814532 3219848 cri.go:89] found id: ""
	I1217 12:02:50.814555 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.814563 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:50.814569 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:50.814628 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:50.848966 3219848 cri.go:89] found id: ""
	I1217 12:02:50.848989 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.848997 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:50.849004 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:50.849066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:50.873257 3219848 cri.go:89] found id: ""
	I1217 12:02:50.873284 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.873293 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:50.873300 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:50.873364 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:50.897538 3219848 cri.go:89] found id: ""
	I1217 12:02:50.897564 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.897573 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:50.897579 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:50.897638 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:50.922912 3219848 cri.go:89] found id: ""
	I1217 12:02:50.922937 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.922946 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:50.922953 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:50.923013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:50.948094 3219848 cri.go:89] found id: ""
	I1217 12:02:50.948120 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.948129 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:50.948136 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:50.948196 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:50.974087 3219848 cri.go:89] found id: ""
	I1217 12:02:50.974114 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.974124 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:50.974131 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:50.974190 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:51.006127 3219848 cri.go:89] found id: ""
	I1217 12:02:51.006159 3219848 logs.go:282] 0 containers: []
	W1217 12:02:51.006169 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:51.006256 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:51.006275 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:51.032290 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:51.032323 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:51.063443 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:51.063469 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:51.119487 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:51.119523 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:51.138001 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:51.138031 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:51.208764 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	** stderr ** 
	E1217 12:02:51.200371    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.201009    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202548    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202968    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.204568    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
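Every cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*", a process-level probe that is independent of the CRI: pgrep exits 0 and prints the PID when a matching process exists, and exits 1 when none does. A sketch of interpreting that exit status (illustrative only; run without sudo for brevity):

// Sketch of the pgrep probe that opens each cycle in this log.
// Exit status 1 from pgrep means "no matching process", not an error.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		fmt.Println("no kube-apiserver process found")
		return
	}
	if err != nil {
		fmt.Println("pgrep failed:", err)
		return
	}
	fmt.Println("apiserver pid(s):", strings.TrimSpace(string(out)))
}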
	I1217 12:02:53.709691 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:53.720597 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:53.720678 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:53.745776 3219848 cri.go:89] found id: ""
	I1217 12:02:53.745802 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.745811 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:53.745819 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:53.745878 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:53.775989 3219848 cri.go:89] found id: ""
	I1217 12:02:53.776013 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.776021 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:53.776027 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:53.776098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:53.810226 3219848 cri.go:89] found id: ""
	I1217 12:02:53.810253 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.810262 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:53.810269 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:53.810333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:53.839758 3219848 cri.go:89] found id: ""
	I1217 12:02:53.839778 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.839787 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:53.839793 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:53.839857 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:53.864680 3219848 cri.go:89] found id: ""
	I1217 12:02:53.864745 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.864768 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:53.864788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:53.864872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:53.888540 3219848 cri.go:89] found id: ""
	I1217 12:02:53.888561 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.888569 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:53.888576 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:53.888640 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:53.912908 3219848 cri.go:89] found id: ""
	I1217 12:02:53.912973 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.912998 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:53.913015 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:53.913087 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:53.942233 3219848 cri.go:89] found id: ""
	I1217 12:02:53.942254 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.942263 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:53.942285 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:53.942300 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:53.998450 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:53.998485 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:54.017836 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:54.017867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:54.086072 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	** stderr ** 
	E1217 12:02:54.077439    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.078327    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.079921    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.080399    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.082101    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:54.086097 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:54.086110 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:54.112391 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:54.112586 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:56.648110 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:56.658791 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:56.658863 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:56.685484 3219848 cri.go:89] found id: ""
	I1217 12:02:56.685508 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.685516 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:56.685526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:56.685587 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:56.710064 3219848 cri.go:89] found id: ""
	I1217 12:02:56.710126 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.710141 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:56.710148 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:56.710219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:56.735357 3219848 cri.go:89] found id: ""
	I1217 12:02:56.735383 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.735393 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:56.735404 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:56.735465 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:56.767684 3219848 cri.go:89] found id: ""
	I1217 12:02:56.767710 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.767724 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:56.767731 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:56.767792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:56.809924 3219848 cri.go:89] found id: ""
	I1217 12:02:56.809951 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.809960 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:56.809968 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:56.810026 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:56.839853 3219848 cri.go:89] found id: ""
	I1217 12:02:56.839879 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.839889 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:56.839895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:56.839956 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:56.866637 3219848 cri.go:89] found id: ""
	I1217 12:02:56.866663 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.866672 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:56.866679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:56.866746 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:56.891828 3219848 cri.go:89] found id: ""
	I1217 12:02:56.891853 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.891862 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:56.891872 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:56.891885 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:56.948612 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:56.948652 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:56.964832 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:56.964864 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:57.035706 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	** stderr ** 
	E1217 12:02:57.026894    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.027527    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.029280    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.030006    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.031607    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:57.035725 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:57.035783 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:57.061297 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:57.061332 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:59.592887 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:59.603568 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:59.603647 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:59.628351 3219848 cri.go:89] found id: ""
	I1217 12:02:59.628378 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.628387 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:59.628395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:59.628503 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:59.654358 3219848 cri.go:89] found id: ""
	I1217 12:02:59.654380 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.654388 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:59.654394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:59.654456 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:59.679684 3219848 cri.go:89] found id: ""
	I1217 12:02:59.679703 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.679717 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:59.679723 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:59.679786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:59.706460 3219848 cri.go:89] found id: ""
	I1217 12:02:59.706491 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.706501 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:59.706507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:59.706570 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:59.736016 3219848 cri.go:89] found id: ""
	I1217 12:02:59.736041 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.736050 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:59.736057 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:59.736116 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:59.778297 3219848 cri.go:89] found id: ""
	I1217 12:02:59.778323 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.778332 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:59.778339 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:59.778404 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:59.809983 3219848 cri.go:89] found id: ""
	I1217 12:02:59.810009 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.810018 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:59.810025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:59.810082 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:59.843076 3219848 cri.go:89] found id: ""
	I1217 12:02:59.843102 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.843110 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:59.843119 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:59.843131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:59.902975 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:59.903012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:59.918923 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:59.918958 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:59.987681 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	** stderr ** 
	E1217 12:02:59.979645    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.980298    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.981764    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.982249    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.983739    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:59.987704 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:59.987716 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:00.126179 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:00.128746 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:02.747342 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:02.759443 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:02.759536 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:02.813879 3219848 cri.go:89] found id: ""
	I1217 12:03:02.813907 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.813917 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:02.813924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:02.813996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:02.856869 3219848 cri.go:89] found id: ""
	I1217 12:03:02.856899 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.856908 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:02.856915 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:02.856973 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:02.883984 3219848 cri.go:89] found id: ""
	I1217 12:03:02.884015 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.884024 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:02.884031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:02.884094 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:02.911584 3219848 cri.go:89] found id: ""
	I1217 12:03:02.911605 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.911613 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:02.911619 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:02.911677 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:02.941815 3219848 cri.go:89] found id: ""
	I1217 12:03:02.941837 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.941847 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:02.941853 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:02.941920 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:02.971949 3219848 cri.go:89] found id: ""
	I1217 12:03:02.971972 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.971980 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:02.971986 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:02.972045 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:02.997848 3219848 cri.go:89] found id: ""
	I1217 12:03:02.997875 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.997884 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:02.997891 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:02.997952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:03.025293 3219848 cri.go:89] found id: ""
	I1217 12:03:03.025321 3219848 logs.go:282] 0 containers: []
	W1217 12:03:03.025330 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:03.025339 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:03.025353 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:03.095479 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	** stderr ** 
	E1217 12:03:03.086357    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.087966    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.088719    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.089902    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.090320    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:03.095503 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:03.095517 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:03.121627 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:03.121668 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:03.152132 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:03.152162 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:03.208671 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:03.208717 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:05.726193 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:05.737765 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:05.737842 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:05.803315 3219848 cri.go:89] found id: ""
	I1217 12:03:05.803338 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.803355 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:05.803364 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:05.803424 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:05.852889 3219848 cri.go:89] found id: ""
	I1217 12:03:05.852952 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.852967 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:05.852975 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:05.853035 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:05.885239 3219848 cri.go:89] found id: ""
	I1217 12:03:05.885263 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.885274 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:05.885281 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:05.885346 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:05.909571 3219848 cri.go:89] found id: ""
	I1217 12:03:05.909601 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.909610 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:05.909617 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:05.909683 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:05.944648 3219848 cri.go:89] found id: ""
	I1217 12:03:05.944714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.944729 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:05.944742 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:05.944801 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:05.969671 3219848 cri.go:89] found id: ""
	I1217 12:03:05.969707 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.969716 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:05.969738 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:05.969819 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:05.994549 3219848 cri.go:89] found id: ""
	I1217 12:03:05.994575 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.994584 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:05.994590 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:05.994648 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:06.025175 3219848 cri.go:89] found id: ""
	I1217 12:03:06.025201 3219848 logs.go:282] 0 containers: []
	W1217 12:03:06.025212 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:06.025223 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:06.025255 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:06.094463 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	** stderr ** 
	E1217 12:03:06.085807    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.086594    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.088396    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.089018    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.090252    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:06.094488 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:06.094503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:06.120857 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:06.120892 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:06.148825 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:06.148854 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:06.207501 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:06.207537 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:08.724013 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:08.734763 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:08.734854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:08.797461 3219848 cri.go:89] found id: ""
	I1217 12:03:08.797536 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.797561 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:08.797583 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:08.797692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:08.849950 3219848 cri.go:89] found id: ""
	I1217 12:03:08.850015 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.850031 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:08.850039 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:08.850099 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:08.876353 3219848 cri.go:89] found id: ""
	I1217 12:03:08.876378 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.876387 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:08.876394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:08.876474 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:08.902743 3219848 cri.go:89] found id: ""
	I1217 12:03:08.902767 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.902776 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:08.902783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:08.902847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:08.928380 3219848 cri.go:89] found id: ""
	I1217 12:03:08.928405 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.928439 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:08.928447 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:08.928508 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:08.953372 3219848 cri.go:89] found id: ""
	I1217 12:03:08.953397 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.953406 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:08.953413 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:08.953481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:08.977913 3219848 cri.go:89] found id: ""
	I1217 12:03:08.977935 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.977945 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:08.977951 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:08.978015 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:09.014088 3219848 cri.go:89] found id: ""
	I1217 12:03:09.014114 3219848 logs.go:282] 0 containers: []
	W1217 12:03:09.014123 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:09.014133 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:09.014144 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:09.069559 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:09.069599 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:09.085849 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:09.085877 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:09.153859 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	** stderr ** 
	E1217 12:03:09.145727    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.146529    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148157    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148779    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.150028    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:09.153879 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:09.153892 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:09.179067 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:09.179099 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
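
[annotation] The block above is one iteration of minikube's wait loop for the apiserver: check for a kube-apiserver process with pgrep, query crictl for each expected control-plane container, and, finding none, gather kubelet/dmesg/"describe nodes"/containerd/container-status logs before retrying about every three seconds. Below is a minimal standalone sketch of that loop. It shells out to the same sudo pgrep and sudo crictl commands recorded in the log, but the structure and names are illustrative, not minikube's actual code (which lives in cri.go and logs.go), and it assumes it runs on the minikube node where crictl is installed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// The eight component names the log above probes for, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// apiserverRunning mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// containerIDs mirrors: sudo crictl ps -a --quiet --name=<component>
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for !apiserverRunning() {
		for _, c := range components {
			if ids := containerIDs(c); len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			}
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between iterations
	}
	fmt.Println("kube-apiserver process found")
}

[annotation] Note the design visible in the log: the loop polls at a fixed short interval and re-gathers full diagnostics on every miss, which is why the same "0 containers" and "connection refused" output repeats below with only timestamps and PIDs changing.
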
	I1217 12:03:11.708448 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:11.719221 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:11.719291 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:11.744009 3219848 cri.go:89] found id: ""
	I1217 12:03:11.744033 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.744042 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:11.744048 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:11.744104 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:11.795640 3219848 cri.go:89] found id: ""
	I1217 12:03:11.795663 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.795671 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:11.795678 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:11.795739 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:11.851553 3219848 cri.go:89] found id: ""
	I1217 12:03:11.851573 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.851581 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:11.851587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:11.851642 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:11.879197 3219848 cri.go:89] found id: ""
	I1217 12:03:11.879272 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.879294 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:11.879316 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:11.879432 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:11.904743 3219848 cri.go:89] found id: ""
	I1217 12:03:11.904816 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.904839 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:11.904864 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:11.904974 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:11.930378 3219848 cri.go:89] found id: ""
	I1217 12:03:11.930452 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.930482 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:11.930491 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:11.930562 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:11.955446 3219848 cri.go:89] found id: ""
	I1217 12:03:11.955475 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.955485 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:11.955492 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:11.955553 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:11.980056 3219848 cri.go:89] found id: ""
	I1217 12:03:11.980082 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.980092 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:11.980102 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:11.980113 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:12.039392 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:12.039430 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:12.055724 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:12.055752 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:12.120835 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:12.111964    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.112751    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114462    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114770    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.116985    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:12.111964    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.112751    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114462    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114770    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.116985    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:12.120858 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:12.120871 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:12.145568 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:12.145601 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:14.685252 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:14.695909 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:14.695982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:14.722094 3219848 cri.go:89] found id: ""
	I1217 12:03:14.722116 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.722124 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:14.722131 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:14.722191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:14.747765 3219848 cri.go:89] found id: ""
	I1217 12:03:14.747790 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.747799 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:14.747805 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:14.747863 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:14.832061 3219848 cri.go:89] found id: ""
	I1217 12:03:14.832086 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.832096 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:14.832103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:14.832175 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:14.861589 3219848 cri.go:89] found id: ""
	I1217 12:03:14.861612 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.861621 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:14.861628 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:14.861687 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:14.887122 3219848 cri.go:89] found id: ""
	I1217 12:03:14.887144 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.887153 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:14.887160 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:14.887219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:14.913961 3219848 cri.go:89] found id: ""
	I1217 12:03:14.913988 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.913996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:14.914003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:14.914063 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:14.940509 3219848 cri.go:89] found id: ""
	I1217 12:03:14.940539 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.940584 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:14.940599 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:14.940684 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:14.968190 3219848 cri.go:89] found id: ""
	I1217 12:03:14.968260 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.968286 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:14.968314 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:14.968341 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:15.025687 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:15.025728 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:15.048063 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:15.048161 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:15.120549 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:15.111260    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.111932    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.113791    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.114487    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.116204    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:15.111260    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.111932    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.113791    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.114487    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.116204    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:15.120575 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:15.120590 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:15.147374 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:15.147419 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
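
[annotation] Every "describe nodes" attempt in this section fails the same way: kubectl cannot reach https://localhost:8443 because nothing is listening on the apiserver port. A quick way to confirm that reading independently of kubectl is a bare TCP dial from the node; the sketch below is illustrative only, with the port taken from the errors in the log.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 8443 is the apiserver port kubectl targets in the errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

[annotation] "connection refused" (as opposed to a timeout) means the TCP stack actively rejected the dial: the host is reachable but no process owns the socket, consistent with crictl finding no kube-apiserver container.
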
	I1217 12:03:17.678613 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:17.689902 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:17.689996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:17.715580 3219848 cri.go:89] found id: ""
	I1217 12:03:17.715617 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.715626 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:17.715634 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:17.715706 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:17.746656 3219848 cri.go:89] found id: ""
	I1217 12:03:17.746680 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.746689 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:17.746696 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:17.746757 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:17.777911 3219848 cri.go:89] found id: ""
	I1217 12:03:17.777981 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.778005 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:17.778031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:17.778142 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:17.841621 3219848 cri.go:89] found id: ""
	I1217 12:03:17.841682 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.841714 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:17.841734 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:17.841839 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:17.874462 3219848 cri.go:89] found id: ""
	I1217 12:03:17.874536 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.874559 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:17.874573 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:17.874655 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:17.899519 3219848 cri.go:89] found id: ""
	I1217 12:03:17.899563 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.899573 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:17.899580 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:17.899654 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:17.925535 3219848 cri.go:89] found id: ""
	I1217 12:03:17.925559 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.925568 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:17.925574 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:17.925642 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:17.950672 3219848 cri.go:89] found id: ""
	I1217 12:03:17.950737 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.950761 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:17.950787 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:17.950826 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:18.006915 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:18.006964 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:18.024598 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:18.024632 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:18.093800 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:18.085487    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.086439    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.087176    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.088142    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.089680    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:18.085487    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.086439    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.087176    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.088142    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.089680    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:18.093830 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:18.093843 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:18.120115 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:18.120150 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:20.651699 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:20.662809 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:20.662885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:20.692750 3219848 cri.go:89] found id: ""
	I1217 12:03:20.692772 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.692781 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:20.692787 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:20.692854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:20.723234 3219848 cri.go:89] found id: ""
	I1217 12:03:20.723259 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.723267 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:20.723273 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:20.723334 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:20.749812 3219848 cri.go:89] found id: ""
	I1217 12:03:20.749833 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.749841 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:20.749847 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:20.749903 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:20.799186 3219848 cri.go:89] found id: ""
	I1217 12:03:20.799208 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.799216 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:20.799222 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:20.799280 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:20.850498 3219848 cri.go:89] found id: ""
	I1217 12:03:20.850573 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.850596 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:20.850617 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:20.850735 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:20.881588 3219848 cri.go:89] found id: ""
	I1217 12:03:20.881660 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.881682 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:20.881702 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:20.881790 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:20.911209 3219848 cri.go:89] found id: ""
	I1217 12:03:20.911275 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.911301 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:20.911316 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:20.911391 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:20.938447 3219848 cri.go:89] found id: ""
	I1217 12:03:20.938473 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.938483 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:20.938492 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:20.938503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:20.995421 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:20.995463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:21.013450 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:21.013483 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:21.084404 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:21.075746    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.076533    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078205    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078900    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.080479    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:21.075746    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.076533    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078205    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078900    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.080479    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:21.084449 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:21.084463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:21.111296 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:21.111335 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
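
[annotation] The "container status" step directly above uses a shell fallback: it runs crictl ps -a when crictl resolves on PATH, otherwise docker ps -a, so the listing works on both containerd- and docker-backed nodes. The same fallback expressed in Go, as an illustrative sketch rather than minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl; fall back to docker if crictl is absent or fails,
	// matching: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Print(string(out))
}
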
	I1217 12:03:23.647949 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:23.658668 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:23.658737 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:23.685275 3219848 cri.go:89] found id: ""
	I1217 12:03:23.685298 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.685307 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:23.685314 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:23.685375 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:23.711416 3219848 cri.go:89] found id: ""
	I1217 12:03:23.711466 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.711478 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:23.711485 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:23.711549 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:23.738391 3219848 cri.go:89] found id: ""
	I1217 12:03:23.738418 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.738427 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:23.738433 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:23.738492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:23.801227 3219848 cri.go:89] found id: ""
	I1217 12:03:23.801253 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.801262 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:23.801268 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:23.801327 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:23.837564 3219848 cri.go:89] found id: ""
	I1217 12:03:23.837585 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.837593 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:23.837600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:23.837660 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:23.864057 3219848 cri.go:89] found id: ""
	I1217 12:03:23.864078 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.864086 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:23.864093 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:23.864159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:23.888263 3219848 cri.go:89] found id: ""
	I1217 12:03:23.888289 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.888298 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:23.888305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:23.888363 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:23.917533 3219848 cri.go:89] found id: ""
	I1217 12:03:23.917555 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.917564 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:23.917573 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:23.917584 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:23.946496 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:23.946525 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:24.003650 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:24.003697 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:24.022449 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:24.022482 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:24.093823 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:24.084998    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.085736    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.087440    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.088190    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.089867    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:24.084998    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.085736    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.087440    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.088190    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.089867    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:24.093845 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:24.093858 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:26.622844 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:26.634100 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:26.634173 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:26.662315 3219848 cri.go:89] found id: ""
	I1217 12:03:26.662341 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.662350 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:26.662357 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:26.662417 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:26.689598 3219848 cri.go:89] found id: ""
	I1217 12:03:26.689623 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.689633 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:26.689640 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:26.689704 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:26.716815 3219848 cri.go:89] found id: ""
	I1217 12:03:26.716841 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.716850 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:26.716858 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:26.716926 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:26.743338 3219848 cri.go:89] found id: ""
	I1217 12:03:26.743364 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.743375 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:26.743382 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:26.743447 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:26.799290 3219848 cri.go:89] found id: ""
	I1217 12:03:26.799326 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.799335 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:26.799342 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:26.799412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:26.854473 3219848 cri.go:89] found id: ""
	I1217 12:03:26.854539 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.854555 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:26.854563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:26.854625 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:26.880552 3219848 cri.go:89] found id: ""
	I1217 12:03:26.880581 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.880591 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:26.880598 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:26.880659 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:26.906009 3219848 cri.go:89] found id: ""
	I1217 12:03:26.906042 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.906052 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:26.906061 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:26.906072 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:26.971795 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:26.963736    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.964328    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.965821    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.966197    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.967699    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:26.963736    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.964328    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.965821    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.966197    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.967699    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:26.971818 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:26.971831 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:26.996929 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:26.996968 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:27.031442 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:27.031479 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:27.088296 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:27.088330 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
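
[annotation] The kubelet and containerd gathering steps both tail the last 400 lines of a systemd unit's journal (journalctl -u <unit> -n 400). A small helper reproducing that step, illustrative only and assuming sudo access to the journal on the node:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// unitLogs tails the last n lines of a systemd unit's journal,
// as the "journalctl -u kubelet -n 400" step in the log does.
func unitLogs(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", strconv.Itoa(n)).Output()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(unit, 400)
		if err != nil {
			fmt.Println(unit, "journal unavailable:", err)
			continue
		}
		fmt.Print(logs)
	}
}
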
	I1217 12:03:29.604978 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:29.615685 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:29.615754 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:29.642346 3219848 cri.go:89] found id: ""
	I1217 12:03:29.642375 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.642384 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:29.642391 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:29.642449 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:29.669188 3219848 cri.go:89] found id: ""
	I1217 12:03:29.669214 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.669223 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:29.669230 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:29.669293 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:29.695623 3219848 cri.go:89] found id: ""
	I1217 12:03:29.695648 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.695657 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:29.695663 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:29.695729 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:29.721447 3219848 cri.go:89] found id: ""
	I1217 12:03:29.721472 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.721482 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:29.721489 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:29.721551 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:29.746217 3219848 cri.go:89] found id: ""
	I1217 12:03:29.746244 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.746253 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:29.746261 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:29.746318 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:29.797088 3219848 cri.go:89] found id: ""
	I1217 12:03:29.797122 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.797131 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:29.797137 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:29.797210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:29.845942 3219848 cri.go:89] found id: ""
	I1217 12:03:29.845962 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.845971 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:29.845977 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:29.846041 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:29.881686 3219848 cri.go:89] found id: ""
	I1217 12:03:29.881714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.881723 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:29.881733 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:29.881745 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:29.938916 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:29.938949 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:29.954625 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:29.954702 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:30.048700 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:30.033826    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.034802    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036344    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036964    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.039023    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:30.033826    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.034802    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036344    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036964    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.039023    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:30.048776 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:30.048805 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:30.081544 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:30.081588 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:32.617502 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:32.628255 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:32.628328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:32.653287 3219848 cri.go:89] found id: ""
	I1217 12:03:32.653314 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.653323 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:32.653331 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:32.653393 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:32.678914 3219848 cri.go:89] found id: ""
	I1217 12:03:32.678938 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.678946 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:32.678952 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:32.679013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:32.705809 3219848 cri.go:89] found id: ""
	I1217 12:03:32.705835 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.705845 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:32.705852 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:32.705915 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:32.736249 3219848 cri.go:89] found id: ""
	I1217 12:03:32.736278 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.736294 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:32.736301 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:32.736382 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:32.777637 3219848 cri.go:89] found id: ""
	I1217 12:03:32.777666 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.777676 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:32.777684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:32.777749 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:32.848686 3219848 cri.go:89] found id: ""
	I1217 12:03:32.848726 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.848735 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:32.848742 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:32.848811 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:32.877608 3219848 cri.go:89] found id: ""
	I1217 12:03:32.877633 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.877643 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:32.877650 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:32.877715 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:32.912387 3219848 cri.go:89] found id: ""
	I1217 12:03:32.912443 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.912453 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:32.912463 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:32.912478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:32.973780 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1217 12:03:32.965664    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.966474    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.968080    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.968441    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.969916    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	** /stderr **
	I1217 12:03:32.973802 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:32.973816 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:32.999779 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:32.999818 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:33.035424 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:33.035456 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:33.095096 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:33.095136 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
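
That completes one full log-gathering pass: kubelet journal, dmesg, describe nodes, containerd journal, and container status. The dmesg invocation is worth decoding; assuming the node image ships util-linux dmesg (an assumption, not stated in the log), the flags mean human-readable timestamps, no pager, no color, and warning-or-worse kernel messages only:

    # Annotated form of the logged dmesg capture (flag meanings per util-linux dmesg).
    sudo dmesg -P -H -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # -P           : no pager
    # -H           : human-readable output
    # -L=never     : disable color codes so the capture stays plain text
    # --level ...  : keep only warning-and-worse kernel messages
    # tail -n 400  : cap the capture at the last 400 lines
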
	I1217 12:03:35.611791 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:35.625472 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:35.625546 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:35.656243 3219848 cri.go:89] found id: ""
	I1217 12:03:35.656265 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.656273 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:35.656280 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:35.656339 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:35.681938 3219848 cri.go:89] found id: ""
	I1217 12:03:35.681964 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.681972 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:35.681978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:35.682038 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:35.711864 3219848 cri.go:89] found id: ""
	I1217 12:03:35.711887 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.711896 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:35.711902 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:35.711961 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:35.736900 3219848 cri.go:89] found id: ""
	I1217 12:03:35.736924 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.736932 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:35.736942 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:35.737002 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:35.796476 3219848 cri.go:89] found id: ""
	I1217 12:03:35.796553 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.796576 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:35.796598 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:35.796711 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:35.851385 3219848 cri.go:89] found id: ""
	I1217 12:03:35.851463 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.851487 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:35.851530 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:35.851627 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:35.879315 3219848 cri.go:89] found id: ""
	I1217 12:03:35.879388 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.879423 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:35.879447 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:35.879560 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:35.904369 3219848 cri.go:89] found id: ""
	I1217 12:03:35.904461 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.904485 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:35.904509 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:35.904539 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:35.962316 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:35.962358 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:35.978473 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:35.978503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:36.048228 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1217 12:03:36.039946    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.040655    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.042240    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.042853    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.043967    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	** /stderr **
	I1217 12:03:36.048254 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:36.048267 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:36.075099 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:36.075134 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
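
Each pass walks the same fixed list of control-plane and addon components through crictl, and an empty ID list is what produces the "No container was found matching" warnings seen throughout. A minimal sketch of that probe loop, assuming shell access to the node; the component list is read off this log:

    # Probe each expected component; crictl prints matching container IDs, one per line.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$c\""
      fi
    done
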
	I1217 12:03:38.607418 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:38.618789 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:38.618869 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:38.646270 3219848 cri.go:89] found id: ""
	I1217 12:03:38.646297 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.646307 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:38.646315 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:38.646379 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:38.671906 3219848 cri.go:89] found id: ""
	I1217 12:03:38.671931 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.671940 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:38.671947 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:38.672012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:38.696480 3219848 cri.go:89] found id: ""
	I1217 12:03:38.696504 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.696513 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:38.696520 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:38.696581 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:38.727000 3219848 cri.go:89] found id: ""
	I1217 12:03:38.727026 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.727036 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:38.727042 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:38.727114 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:38.782353 3219848 cri.go:89] found id: ""
	I1217 12:03:38.782381 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.782391 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:38.782398 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:38.782459 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:38.847087 3219848 cri.go:89] found id: ""
	I1217 12:03:38.847110 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.847118 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:38.847125 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:38.847183 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:38.874682 3219848 cri.go:89] found id: ""
	I1217 12:03:38.874704 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.874712 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:38.874718 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:38.874780 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:38.902269 3219848 cri.go:89] found id: ""
	I1217 12:03:38.902297 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.902306 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:38.902316 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:38.902331 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:38.967646 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1217 12:03:38.958671    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.959248    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961005    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961508    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.963014    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	** /stderr **
	I1217 12:03:38.967671 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:38.967685 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:38.993086 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:38.993121 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:39.024046 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:39.024079 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:39.080928 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:39.080962 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:41.597202 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:41.608508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:41.608582 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:41.634319 3219848 cri.go:89] found id: ""
	I1217 12:03:41.634344 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.634359 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:41.634366 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:41.634427 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:41.660053 3219848 cri.go:89] found id: ""
	I1217 12:03:41.660076 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.660085 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:41.660092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:41.660159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:41.686022 3219848 cri.go:89] found id: ""
	I1217 12:03:41.686047 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.686056 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:41.686062 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:41.686119 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:41.711689 3219848 cri.go:89] found id: ""
	I1217 12:03:41.711714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.711723 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:41.711729 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:41.711798 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:41.738135 3219848 cri.go:89] found id: ""
	I1217 12:03:41.738161 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.738170 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:41.738177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:41.738235 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:41.794953 3219848 cri.go:89] found id: ""
	I1217 12:03:41.794975 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.794984 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:41.794991 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:41.795051 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:41.832712 3219848 cri.go:89] found id: ""
	I1217 12:03:41.832747 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.832755 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:41.832762 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:41.832872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:41.862947 3219848 cri.go:89] found id: ""
	I1217 12:03:41.862967 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.862976 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:41.862985 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:41.862996 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:41.888484 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:41.888519 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:41.919432 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:41.919461 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:41.979083 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:41.979117 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:41.995225 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:41.995256 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:42.068500 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1217 12:03:42.058178    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.059172    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061060    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061946    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.063848    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	** /stderr **
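
Every describe-nodes attempt fails the same way: kubectl's client-side discovery (memcache.go) cannot reach https://localhost:8443 because, as the crictl probes show, no kube-apiserver container ever started, so nothing is listening on that port. A minimal sketch of an equivalent direct health probe against the same endpoint, assuming shell access to the node; the binary and kubeconfig paths are taken from the log:

    # Query the apiserver health endpoint directly; with no apiserver container
    # running, this fails with the same connection-refused error as above.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
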
	I1217 12:03:44.569152 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:44.579717 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:44.579791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:44.604579 3219848 cri.go:89] found id: ""
	I1217 12:03:44.604605 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.604614 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:44.604621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:44.604680 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:44.628954 3219848 cri.go:89] found id: ""
	I1217 12:03:44.628987 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.628997 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:44.629004 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:44.629066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:44.657345 3219848 cri.go:89] found id: ""
	I1217 12:03:44.657372 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.657381 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:44.657388 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:44.657445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:44.682960 3219848 cri.go:89] found id: ""
	I1217 12:03:44.682983 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.683000 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:44.683007 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:44.683066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:44.712406 3219848 cri.go:89] found id: ""
	I1217 12:03:44.712451 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.712461 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:44.712468 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:44.712526 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:44.737929 3219848 cri.go:89] found id: ""
	I1217 12:03:44.737952 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.737961 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:44.737967 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:44.738027 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:44.778893 3219848 cri.go:89] found id: ""
	I1217 12:03:44.778921 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.778930 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:44.778938 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:44.779003 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:44.818695 3219848 cri.go:89] found id: ""
	I1217 12:03:44.818724 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.818733 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:44.818742 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:44.818754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:44.888711 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:44.888748 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:44.905193 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:44.905224 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:44.969126 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1217 12:03:44.960653    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.961469    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963160    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963503    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.964997    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	** /stderr **
	I1217 12:03:44.969149 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:44.969162 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:44.995233 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:44.995272 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:47.580853 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:47.591106 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:47.591173 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:47.616262 3219848 cri.go:89] found id: ""
	I1217 12:03:47.616294 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.616304 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:47.616317 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:47.616384 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:47.641674 3219848 cri.go:89] found id: ""
	I1217 12:03:47.641702 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.641712 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:47.641718 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:47.641778 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:47.667191 3219848 cri.go:89] found id: ""
	I1217 12:03:47.667215 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.667224 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:47.667230 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:47.667296 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:47.696304 3219848 cri.go:89] found id: ""
	I1217 12:03:47.696332 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.696341 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:47.696349 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:47.696412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:47.726109 3219848 cri.go:89] found id: ""
	I1217 12:03:47.726134 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.726143 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:47.726149 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:47.726212 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:47.762878 3219848 cri.go:89] found id: ""
	I1217 12:03:47.762904 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.762914 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:47.762920 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:47.762977 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:47.824894 3219848 cri.go:89] found id: ""
	I1217 12:03:47.824932 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.824957 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:47.824973 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:47.825056 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:47.851816 3219848 cri.go:89] found id: ""
	I1217 12:03:47.851852 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.851861 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:47.851888 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:47.851907 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:47.908314 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:47.908352 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:47.924222 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:47.924250 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:47.986251 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1217 12:03:47.978126    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.978646    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980334    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980816    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.982319    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	** /stderr **
	I1217 12:03:47.986276 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:47.986290 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:48.010815 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:48.010855 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
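
The pgrep probes land roughly every three seconds (12:03:32, :35, :38, :41, :44, :47, ...), so this entire stretch is a single retry loop waiting for an apiserver process to appear. A minimal sketch of the same wait, assuming the ~3 s interval is fixed rather than backed off:

    # Wait for an apiserver process the way the log does: full-command-line match (-f),
    # exact match (-x), newest matching process only (-n).
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done
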
	I1217 12:03:50.542164 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:50.553364 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:50.553437 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:50.581389 3219848 cri.go:89] found id: ""
	I1217 12:03:50.581423 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.581432 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:50.581439 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:50.581508 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:50.610382 3219848 cri.go:89] found id: ""
	I1217 12:03:50.610405 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.610413 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:50.610422 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:50.610482 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:50.636111 3219848 cri.go:89] found id: ""
	I1217 12:03:50.636137 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.636147 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:50.636153 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:50.636218 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:50.661308 3219848 cri.go:89] found id: ""
	I1217 12:03:50.661334 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.661342 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:50.661350 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:50.661415 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:50.688144 3219848 cri.go:89] found id: ""
	I1217 12:03:50.688172 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.688181 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:50.688187 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:50.688251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:50.715059 3219848 cri.go:89] found id: ""
	I1217 12:03:50.715087 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.715096 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:50.715103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:50.715165 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:50.745229 3219848 cri.go:89] found id: ""
	I1217 12:03:50.745253 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.745262 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:50.745269 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:50.745330 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:50.793705 3219848 cri.go:89] found id: ""
	I1217 12:03:50.793735 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.793743 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:50.793752 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:50.793763 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:50.876190 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:50.876229 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:50.893552 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:50.893581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:50.960907 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1217 12:03:50.951439    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.952410    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954030    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954408    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.956833    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	** /stderr **
	I1217 12:03:50.960928 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:50.960942 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:50.986454 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:50.986485 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:53.522123 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:53.533167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:53.533246 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:53.558553 3219848 cri.go:89] found id: ""
	I1217 12:03:53.558580 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.558589 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:53.558596 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:53.558668 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:53.586267 3219848 cri.go:89] found id: ""
	I1217 12:03:53.586295 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.586305 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:53.586318 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:53.586383 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:53.613148 3219848 cri.go:89] found id: ""
	I1217 12:03:53.613174 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.613183 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:53.613190 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:53.613251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:53.639336 3219848 cri.go:89] found id: ""
	I1217 12:03:53.639371 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.639381 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:53.639387 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:53.639452 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:53.664632 3219848 cri.go:89] found id: ""
	I1217 12:03:53.664700 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.664730 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:53.664745 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:53.664820 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:53.689663 3219848 cri.go:89] found id: ""
	I1217 12:03:53.689733 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.689760 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:53.689774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:53.689851 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:53.714636 3219848 cri.go:89] found id: ""
	I1217 12:03:53.714707 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.714733 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:53.714747 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:53.714827 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:53.744583 3219848 cri.go:89] found id: ""
	I1217 12:03:53.744610 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.744620 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:53.744629 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:53.744640 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:53.833845 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:53.833884 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:53.853606 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:53.853632 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:53.921245 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1217 12:03:53.912685    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.913171    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.914992    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.915543    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.917157    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	** /stderr **
	I1217 12:03:53.921269 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:53.921282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:53.946578 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:53.946611 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
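The block above is one diagnostics pass: with no kube-apiserver process found, minikube lists CRI containers for each control-plane component over SSH, and every empty crictl result is logged as "0 containers" plus a "No container was found" warning. A minimal sketch of that per-component check, assuming a hypothetical runSSH helper in place of minikube's ssh_runner (the real logic in cri.go differs in detail):

    package main

    import "strings"

    // listCRIContainers mirrors the crictl checks logged above: ask crictl for
    // all containers (any state) whose name matches `name`; --quiet prints one
    // container ID per line. An empty result is what the log reports as
    // "0 containers". runSSH is a hypothetical stand-in that executes the
    // command inside the node and returns its stdout.
    func listCRIContainers(runSSH func(cmd string) (string, error), name string) ([]string, error) {
        out, err := runSSH("sudo crictl ps -a --quiet --name=" + name)
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

With every component returning an empty list, the pass falls through to gathering kubelet, dmesg, containerd, and container-status logs, as the surrounding lines show.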
	I1217 12:03:56.477034 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:56.488539 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:56.488622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:56.514320 3219848 cri.go:89] found id: ""
	I1217 12:03:56.514347 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.514356 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:56.514363 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:56.514426 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:56.540629 3219848 cri.go:89] found id: ""
	I1217 12:03:56.540668 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.540676 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:56.540687 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:56.540752 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:56.571552 3219848 cri.go:89] found id: ""
	I1217 12:03:56.571586 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.571595 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:56.571602 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:56.571725 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:56.598758 3219848 cri.go:89] found id: ""
	I1217 12:03:56.598835 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.598858 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:56.598878 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:56.598964 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:56.624629 3219848 cri.go:89] found id: ""
	I1217 12:03:56.624659 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.624668 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:56.624675 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:56.624736 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:56.650192 3219848 cri.go:89] found id: ""
	I1217 12:03:56.650214 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.650222 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:56.650229 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:56.650286 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:56.675523 3219848 cri.go:89] found id: ""
	I1217 12:03:56.675548 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.675557 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:56.675563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:56.675651 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:56.701703 3219848 cri.go:89] found id: ""
	I1217 12:03:56.701731 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.701740 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:56.701751 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:56.701762 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:56.717844 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:56.717877 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:56.837097 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:56.823600    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.824395    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827077    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827764    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.829468    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:56.823600    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.824395    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827077    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827764    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.829468    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:56.837160 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:56.837195 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:56.864759 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:56.864792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:56.892589 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:56.892615 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:59.450097 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:59.460573 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:59.460649 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:59.484966 3219848 cri.go:89] found id: ""
	I1217 12:03:59.484992 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.485001 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:59.485007 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:59.485073 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:59.509519 3219848 cri.go:89] found id: ""
	I1217 12:03:59.509545 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.509554 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:59.509561 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:59.509619 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:59.535238 3219848 cri.go:89] found id: ""
	I1217 12:03:59.535307 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.535331 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:59.535351 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:59.535443 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:59.561799 3219848 cri.go:89] found id: ""
	I1217 12:03:59.561823 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.561832 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:59.561839 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:59.561898 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:59.587394 3219848 cri.go:89] found id: ""
	I1217 12:03:59.587416 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.587425 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:59.587431 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:59.587489 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:59.614672 3219848 cri.go:89] found id: ""
	I1217 12:03:59.614695 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.614704 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:59.614712 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:59.614774 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:59.641144 3219848 cri.go:89] found id: ""
	I1217 12:03:59.641171 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.641180 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:59.641187 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:59.641251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:59.667139 3219848 cri.go:89] found id: ""
	I1217 12:03:59.667167 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.667176 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:59.667184 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:59.667196 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:59.725056 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:59.725091 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:59.741510 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:59.741593 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:59.858554 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:59.849895    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.850546    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852238    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852841    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.854544    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:59.849895    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.850546    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852238    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852841    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.854544    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:59.858578 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:59.858592 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:59.884457 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:59.884492 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:02.413040 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:02.426774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:02.426848 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:02.456484 3219848 cri.go:89] found id: ""
	I1217 12:04:02.456587 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.456601 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:02.456609 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:02.456706 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:02.485434 3219848 cri.go:89] found id: ""
	I1217 12:04:02.485506 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.485531 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:02.485547 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:02.485622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:02.512063 3219848 cri.go:89] found id: ""
	I1217 12:04:02.512100 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.512109 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:02.512116 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:02.512195 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:02.538362 3219848 cri.go:89] found id: ""
	I1217 12:04:02.538433 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.538454 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:02.538462 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:02.538525 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:02.567959 3219848 cri.go:89] found id: ""
	I1217 12:04:02.567994 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.568003 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:02.568009 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:02.568077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:02.594823 3219848 cri.go:89] found id: ""
	I1217 12:04:02.594860 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.594869 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:02.594876 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:02.594950 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:02.625125 3219848 cri.go:89] found id: ""
	I1217 12:04:02.625196 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.625211 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:02.625219 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:02.625282 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:02.650998 3219848 cri.go:89] found id: ""
	I1217 12:04:02.651033 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.651042 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:02.651051 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:02.651062 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:02.676950 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:02.676984 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:02.711118 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:02.711144 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:02.774152 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:02.774233 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:02.794787 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:02.794862 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:02.886703 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:02.878272    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.878713    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880270    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880830    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.882492    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:02.878272    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.878713    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880270    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880830    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.882492    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
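Each failed "describe nodes" above comes from shelling out to the version-matched kubectl with the node-local kubeconfig; the repeated `connect: connection refused` on [::1]:8443 is consistent with the crictl checks finding no kube-apiserver container, so nothing is listening on the apiserver port. A rough sketch of that invocation, run locally for illustration only (the real gather executes it through SSH inside the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // describeNodes approximates the "Gathering logs for describe nodes" step:
    // run the version-matched kubectl that minikube installs under
    // /var/lib/minikube/binaries/<version>/ against the node-local kubeconfig.
    // With no apiserver up, kubectl exits 1 with the "connection refused"
    // errors shown above, and the gatherer logs a warning instead of aborting.
    func describeNodes(version string) (string, error) {
        bin := "/var/lib/minikube/binaries/" + version + "/kubectl"
        cmd := exec.Command("sudo", bin, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return string(out), fmt.Errorf("describe nodes: %w", err)
        }
        return string(out), nil
    }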
	I1217 12:04:05.386993 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:05.398225 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:05.398299 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:05.426294 3219848 cri.go:89] found id: ""
	I1217 12:04:05.426321 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.426330 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:05.426337 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:05.426399 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:05.451004 3219848 cri.go:89] found id: ""
	I1217 12:04:05.451027 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.451036 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:05.451049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:05.451112 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:05.476504 3219848 cri.go:89] found id: ""
	I1217 12:04:05.476532 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.476542 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:05.476549 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:05.476607 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:05.506001 3219848 cri.go:89] found id: ""
	I1217 12:04:05.506028 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.506036 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:05.506043 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:05.506103 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:05.531776 3219848 cri.go:89] found id: ""
	I1217 12:04:05.531803 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.531813 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:05.531820 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:05.531878 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:05.558040 3219848 cri.go:89] found id: ""
	I1217 12:04:05.558068 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.558078 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:05.558085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:05.558149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:05.582988 3219848 cri.go:89] found id: ""
	I1217 12:04:05.583024 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.583033 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:05.583040 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:05.583115 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:05.609687 3219848 cri.go:89] found id: ""
	I1217 12:04:05.609725 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.609734 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:05.609744 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:05.609756 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:05.677594 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:05.668798    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.669411    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671028    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671605    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.673145    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:05.668798    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.669411    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671028    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671605    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.673145    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:05.677661 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:05.677689 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:05.704024 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:05.704062 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:05.736880 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:05.736906 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:05.810417 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:05.810457 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:08.343493 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:08.353931 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:08.354001 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:08.377982 3219848 cri.go:89] found id: ""
	I1217 12:04:08.378050 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.378062 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:08.378069 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:08.378160 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:08.402837 3219848 cri.go:89] found id: ""
	I1217 12:04:08.402870 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.402880 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:08.402886 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:08.402956 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:08.430641 3219848 cri.go:89] found id: ""
	I1217 12:04:08.430666 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.430675 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:08.430682 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:08.430747 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:08.455904 3219848 cri.go:89] found id: ""
	I1217 12:04:08.455937 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.455947 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:08.455954 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:08.456020 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:08.480357 3219848 cri.go:89] found id: ""
	I1217 12:04:08.480388 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.480398 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:08.480405 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:08.480506 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:08.505595 3219848 cri.go:89] found id: ""
	I1217 12:04:08.505629 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.505682 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:08.505701 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:08.505765 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:08.531028 3219848 cri.go:89] found id: ""
	I1217 12:04:08.531065 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.531074 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:08.531081 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:08.531156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:08.559015 3219848 cri.go:89] found id: ""
	I1217 12:04:08.559051 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.559060 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:08.559069 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:08.559081 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:08.574853 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:08.574883 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:08.640119 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:08.631556    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.632320    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634049    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634630    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.635699    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:08.631556    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.632320    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634049    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634630    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.635699    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:08.640141 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:08.640154 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:08.666054 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:08.666091 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:08.694523 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:08.694553 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
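The cycles repeat roughly every three seconds (12:04:05, 12:04:08, 12:04:11, ...): probe for a kube-apiserver process with pgrep, and on failure collect the same diagnostics before the next attempt, until the overall start timeout expires. A schematic of such a retry loop, not minikube's actual implementation; check and gather are hypothetical stand-ins for the pgrep probe and the crictl/journalctl/dmesg/describe-nodes gathers:

    package main

    import "time"

    // waitForAPIServer polls check() until it succeeds or the deadline passes,
    // running a diagnostics pass after each failed attempt -- the cadence
    // visible in the timestamps above.
    func waitForAPIServer(check func() bool, gather func(), timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if check() { // e.g. sudo pgrep -xnf kube-apiserver.*minikube.*
                return true
            }
            gather()
            time.Sleep(2500 * time.Millisecond)
        }
        return false
    }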
	I1217 12:04:11.260393 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:11.271847 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:11.271939 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:11.297537 3219848 cri.go:89] found id: ""
	I1217 12:04:11.297559 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.297568 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:11.297574 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:11.297669 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:11.326252 3219848 cri.go:89] found id: ""
	I1217 12:04:11.326279 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.326288 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:11.326295 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:11.326354 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:11.354965 3219848 cri.go:89] found id: ""
	I1217 12:04:11.354991 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.355013 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:11.355020 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:11.355085 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:11.379623 3219848 cri.go:89] found id: ""
	I1217 12:04:11.379649 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.379657 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:11.379664 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:11.379730 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:11.405089 3219848 cri.go:89] found id: ""
	I1217 12:04:11.405157 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.405185 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:11.405200 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:11.405276 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:11.431039 3219848 cri.go:89] found id: ""
	I1217 12:04:11.431064 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.431073 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:11.431079 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:11.431138 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:11.456294 3219848 cri.go:89] found id: ""
	I1217 12:04:11.456329 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.456338 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:11.456345 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:11.456437 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:11.485568 3219848 cri.go:89] found id: ""
	I1217 12:04:11.485595 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.485604 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:11.485613 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:11.485628 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:11.542231 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:11.542268 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:11.559119 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:11.559201 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:11.628507 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:11.619906    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.620667    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622406    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622904    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.624511    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:11.619906    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.620667    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622406    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622904    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.624511    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:11.628580 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:11.628617 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:11.654658 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:11.654692 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:14.187317 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:14.200950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:14.201028 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:14.225871 3219848 cri.go:89] found id: ""
	I1217 12:04:14.225907 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.225917 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:14.225924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:14.225982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:14.255169 3219848 cri.go:89] found id: ""
	I1217 12:04:14.255194 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.255203 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:14.255210 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:14.255270 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:14.279884 3219848 cri.go:89] found id: ""
	I1217 12:04:14.279914 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.279928 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:14.279935 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:14.279993 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:14.303876 3219848 cri.go:89] found id: ""
	I1217 12:04:14.303902 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.303911 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:14.303918 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:14.303982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:14.329876 3219848 cri.go:89] found id: ""
	I1217 12:04:14.329902 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.329911 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:14.329924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:14.329993 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:14.355681 3219848 cri.go:89] found id: ""
	I1217 12:04:14.355707 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.355723 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:14.355730 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:14.355791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:14.380557 3219848 cri.go:89] found id: ""
	I1217 12:04:14.380582 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.380591 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:14.380607 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:14.380669 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:14.406559 3219848 cri.go:89] found id: ""
	I1217 12:04:14.406626 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.406652 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:14.406671 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:14.406684 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:14.435535 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:14.435567 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:14.496057 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:14.496100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:14.512036 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:14.512068 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:14.581215 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:14.571459    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.572243    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574240    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574925    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.576493    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:14.571459    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.572243    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574240    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574925    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.576493    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:14.581280 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:14.581299 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
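The "container status" gather is deliberately defensive: it resolves crictl with `which` (falling back to the bare name) and, if that whole invocation fails, tries `sudo docker ps -a` instead, so the gather still yields something on nodes without a working crictl. Reconstructing that command in Go, with the shell string taken verbatim from the log:

    package main

    import "os/exec"

    // containerStatusCmd rebuilds the fallback one-liner from the "container
    // status" gather: prefer crictl when available, fall back to docker.
    func containerStatusCmd() *exec.Cmd {
        return exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }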
	I1217 12:04:17.108603 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:17.119638 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:17.119710 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:17.144879 3219848 cri.go:89] found id: ""
	I1217 12:04:17.144901 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.144909 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:17.144915 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:17.144976 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:17.169341 3219848 cri.go:89] found id: ""
	I1217 12:04:17.169366 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.169375 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:17.169381 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:17.169440 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:17.193770 3219848 cri.go:89] found id: ""
	I1217 12:04:17.193792 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.193800 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:17.193806 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:17.193867 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:17.218766 3219848 cri.go:89] found id: ""
	I1217 12:04:17.218788 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.218797 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:17.218804 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:17.218911 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:17.246745 3219848 cri.go:89] found id: ""
	I1217 12:04:17.246768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.246777 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:17.246783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:17.246844 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:17.271877 3219848 cri.go:89] found id: ""
	I1217 12:04:17.271898 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.271907 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:17.271914 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:17.271971 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:17.296098 3219848 cri.go:89] found id: ""
	I1217 12:04:17.296124 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.296133 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:17.296140 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:17.296202 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:17.321740 3219848 cri.go:89] found id: ""
	I1217 12:04:17.321767 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.321777 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:17.321788 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:17.321799 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:17.378911 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:17.378944 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:17.395425 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:17.395454 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:17.458148 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:17.450570    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.450926    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.452495    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.452908    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.454301    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:17.458172 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:17.458185 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:17.483130 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:17.483199 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
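The cycle above repeats for the rest of this failure: minikube pgreps for a kube-apiserver process, then queries the CRI for each control-plane component by name (the cri.go lines), and every query comes back with zero containers (the logs.go warnings). The following is a minimal standalone sketch of that per-component check, not minikube's actual code; it assumes only that crictl and sudo are available on the node (for example via "minikube ssh"), and the component list and output formatting here are illustrative.

// Hedged sketch: reproduce the per-component CRI query that keeps
// returning empty in the log above. Not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same component names the report queries, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// Same query shape as the log: all states, IDs only, filtered by name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the `No container was found matching ...` warnings.
			fmt.Printf("no container matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}

In this report every component prints the empty case, which is why each cycle falls through to gathering kubelet, dmesg, and containerd logs instead.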
	I1217 12:04:20.011622 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:20.036129 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:20.036210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:20.069785 3219848 cri.go:89] found id: ""
	I1217 12:04:20.069812 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.069820 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:20.069826 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:20.069891 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:20.118138 3219848 cri.go:89] found id: ""
	I1217 12:04:20.118165 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.118174 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:20.118180 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:20.118287 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:20.145219 3219848 cri.go:89] found id: ""
	I1217 12:04:20.145246 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.145267 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:20.145274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:20.145340 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:20.171515 3219848 cri.go:89] found id: ""
	I1217 12:04:20.171541 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.171549 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:20.171556 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:20.171615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:20.198371 3219848 cri.go:89] found id: ""
	I1217 12:04:20.198393 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.198409 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:20.198416 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:20.198476 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:20.226505 3219848 cri.go:89] found id: ""
	I1217 12:04:20.226529 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.226538 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:20.226544 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:20.226604 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:20.251848 3219848 cri.go:89] found id: ""
	I1217 12:04:20.251874 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.251883 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:20.251890 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:20.251951 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:20.281838 3219848 cri.go:89] found id: ""
	I1217 12:04:20.281863 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.281872 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:20.281887 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:20.281899 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:20.344875 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:20.336196    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.336887    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.338603    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.339150    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.340924    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:20.344897 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:20.344909 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:20.370205 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:20.370244 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:20.403171 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:20.403203 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:20.459306 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:20.459342 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
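The timestamps show a fresh diagnostic pass starting roughly every three seconds (12:04:14, 12:04:17, 12:04:20, ...), which is the shape of a poll-until-deadline wait for the apiserver process. Below is a hedged approximation of that retry loop under that assumption; waitForAPIServer is our name, and the two-minute timeout and three-second interval are illustrative values, not minikube's actual constants.

// Hedged sketch: poll for a running kube-apiserver the same way the
// log does, via pgrep, until a deadline. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists; this is the
		// same pattern the report runs before each diagnostic cycle.
		if exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2*time.Minute, 3*time.Second); err != nil {
		fmt.Println(err) // in this report, the wait never succeeds
	}
}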
	I1217 12:04:22.976954 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:22.987706 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:22.987785 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:23.048240 3219848 cri.go:89] found id: ""
	I1217 12:04:23.048267 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.048276 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:23.048282 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:23.048342 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:23.098972 3219848 cri.go:89] found id: ""
	I1217 12:04:23.099001 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.099041 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:23.099055 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:23.099142 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:23.130170 3219848 cri.go:89] found id: ""
	I1217 12:04:23.130192 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.130201 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:23.130207 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:23.130266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:23.157897 3219848 cri.go:89] found id: ""
	I1217 12:04:23.157919 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.157927 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:23.157933 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:23.157990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:23.186732 3219848 cri.go:89] found id: ""
	I1217 12:04:23.186757 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.186766 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:23.186772 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:23.186834 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:23.211252 3219848 cri.go:89] found id: ""
	I1217 12:04:23.211278 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.211287 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:23.211294 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:23.211360 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:23.235484 3219848 cri.go:89] found id: ""
	I1217 12:04:23.235507 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.235516 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:23.235523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:23.235593 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:23.263167 3219848 cri.go:89] found id: ""
	I1217 12:04:23.263195 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.263204 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:23.263213 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:23.263224 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:23.319468 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:23.319503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:23.335277 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:23.335309 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:23.401412 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:23.393032    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.393444    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395045    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395905    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.397587    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:23.401435 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:23.401447 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:23.427002 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:23.427042 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:25.955964 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:25.966813 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:25.966907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:25.991674 3219848 cri.go:89] found id: ""
	I1217 12:04:25.991698 3219848 logs.go:282] 0 containers: []
	W1217 12:04:25.991707 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:25.991714 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:25.991828 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:26.043851 3219848 cri.go:89] found id: ""
	I1217 12:04:26.043878 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.043888 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:26.043895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:26.043963 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:26.099675 3219848 cri.go:89] found id: ""
	I1217 12:04:26.099700 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.099708 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:26.099714 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:26.099786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:26.129744 3219848 cri.go:89] found id: ""
	I1217 12:04:26.129768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.129776 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:26.129783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:26.129849 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:26.155393 3219848 cri.go:89] found id: ""
	I1217 12:04:26.155420 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.155428 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:26.155434 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:26.155492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:26.182178 3219848 cri.go:89] found id: ""
	I1217 12:04:26.182200 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.182209 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:26.182216 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:26.182277 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:26.206976 3219848 cri.go:89] found id: ""
	I1217 12:04:26.207000 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.207009 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:26.207015 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:26.207072 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:26.231357 3219848 cri.go:89] found id: ""
	I1217 12:04:26.231383 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.231391 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:26.231400 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:26.231411 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:26.287609 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:26.287646 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:26.303654 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:26.303701 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:26.372084 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:26.363097    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.363759    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.365390    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.366039    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.367715    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:26.372107 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:26.372122 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:26.398349 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:26.398386 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:28.926935 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:28.938567 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:28.938637 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:28.965018 3219848 cri.go:89] found id: ""
	I1217 12:04:28.965042 3219848 logs.go:282] 0 containers: []
	W1217 12:04:28.965050 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:28.965056 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:28.965116 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:28.993619 3219848 cri.go:89] found id: ""
	I1217 12:04:28.993646 3219848 logs.go:282] 0 containers: []
	W1217 12:04:28.993654 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:28.993661 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:28.993723 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:29.042253 3219848 cri.go:89] found id: ""
	I1217 12:04:29.042274 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.042282 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:29.042289 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:29.042347 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:29.109464 3219848 cri.go:89] found id: ""
	I1217 12:04:29.109486 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.109495 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:29.109501 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:29.109563 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:29.139820 3219848 cri.go:89] found id: ""
	I1217 12:04:29.139842 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.139850 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:29.139857 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:29.139917 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:29.165440 3219848 cri.go:89] found id: ""
	I1217 12:04:29.165465 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.165474 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:29.165481 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:29.165543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:29.191572 3219848 cri.go:89] found id: ""
	I1217 12:04:29.191597 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.191606 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:29.191613 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:29.191673 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:29.217986 3219848 cri.go:89] found id: ""
	I1217 12:04:29.218011 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.218020 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:29.218030 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:29.218041 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:29.274933 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:29.274967 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:29.290733 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:29.290760 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:29.358661 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:29.349933    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.350736    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352310    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352861    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.354377    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:29.358683 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:29.358697 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:29.385070 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:29.385107 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:31.914639 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:31.928018 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:31.928092 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:31.955140 3219848 cri.go:89] found id: ""
	I1217 12:04:31.955163 3219848 logs.go:282] 0 containers: []
	W1217 12:04:31.955171 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:31.955178 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:31.955252 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:31.982332 3219848 cri.go:89] found id: ""
	I1217 12:04:31.982364 3219848 logs.go:282] 0 containers: []
	W1217 12:04:31.982380 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:31.982387 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:31.982448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:32.045708 3219848 cri.go:89] found id: ""
	I1217 12:04:32.045731 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.045740 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:32.045746 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:32.045805 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:32.093198 3219848 cri.go:89] found id: ""
	I1217 12:04:32.093220 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.093229 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:32.093242 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:32.093301 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:32.120574 3219848 cri.go:89] found id: ""
	I1217 12:04:32.120641 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.120664 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:32.120684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:32.120772 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:32.151069 3219848 cri.go:89] found id: ""
	I1217 12:04:32.151137 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.151160 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:32.151182 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:32.151272 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:32.181226 3219848 cri.go:89] found id: ""
	I1217 12:04:32.181303 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.181326 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:32.181347 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:32.181439 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:32.207237 3219848 cri.go:89] found id: ""
	I1217 12:04:32.207295 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.207310 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:32.207324 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:32.207336 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:32.263771 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:32.263808 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:32.279666 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:32.279693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:32.345645 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:32.336846    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338076    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338956    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340462    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340816    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:32.345666 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:32.345679 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:32.371311 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:32.371347 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:34.899829 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:34.911276 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:34.911354 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:34.936056 3219848 cri.go:89] found id: ""
	I1217 12:04:34.936080 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.936089 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:34.936096 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:34.936156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:34.962166 3219848 cri.go:89] found id: ""
	I1217 12:04:34.962192 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.962201 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:34.962207 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:34.962271 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:34.987891 3219848 cri.go:89] found id: ""
	I1217 12:04:34.987916 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.987926 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:34.987934 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:34.987994 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:35.036291 3219848 cri.go:89] found id: ""
	I1217 12:04:35.036319 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.036331 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:35.036339 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:35.036402 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:35.091997 3219848 cri.go:89] found id: ""
	I1217 12:04:35.092023 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.092041 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:35.092049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:35.092119 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:35.126699 3219848 cri.go:89] found id: ""
	I1217 12:04:35.126721 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.126736 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:35.126743 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:35.126802 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:35.152052 3219848 cri.go:89] found id: ""
	I1217 12:04:35.152077 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.152087 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:35.152094 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:35.152156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:35.177868 3219848 cri.go:89] found id: ""
	I1217 12:04:35.177897 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.177906 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:35.177916 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:35.177955 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:35.213172 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:35.213200 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:35.269771 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:35.269807 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:35.285802 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:35.285841 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:35.355953 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:35.345556    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.346168    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348018    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348336    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.351480    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:35.355976 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:35.355988 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:37.883397 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:37.894032 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:37.894101 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:37.927040 3219848 cri.go:89] found id: ""
	I1217 12:04:37.927066 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.927075 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:37.927085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:37.927150 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:37.951890 3219848 cri.go:89] found id: ""
	I1217 12:04:37.951916 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.951925 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:37.951931 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:37.951995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:37.978258 3219848 cri.go:89] found id: ""
	I1217 12:04:37.978286 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.978295 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:37.978302 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:37.978383 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:38.032665 3219848 cri.go:89] found id: ""
	I1217 12:04:38.032689 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.032698 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:38.032705 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:38.032770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:38.068588 3219848 cri.go:89] found id: ""
	I1217 12:04:38.068617 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.068626 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:38.068633 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:38.068703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:38.111074 3219848 cri.go:89] found id: ""
	I1217 12:04:38.111102 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.111112 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:38.111119 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:38.111183 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:38.139962 3219848 cri.go:89] found id: ""
	I1217 12:04:38.139989 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.139998 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:38.140005 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:38.140071 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:38.165120 3219848 cri.go:89] found id: ""
	I1217 12:04:38.165147 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.165156 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:38.165165 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:38.165176 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:38.221183 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:38.221218 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:38.237532 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:38.237565 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:38.307341 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:38.299115    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.299933    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301496    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301856    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.303152    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:38.307362 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:38.307376 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:38.333705 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:38.333739 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
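Every "describe nodes" attempt in this section fails identically: kubectl dials https://localhost:8443 and gets connection refused, meaning nothing is listening on the apiserver port at all, consistent with the empty kube-apiserver container queries. A minimal probe of that same condition is sketched below; the address and timeout are assumptions for illustration, run from inside the node.

// Hedged sketch: check whether anything is listening on the apiserver
// port that kubectl keeps failing to reach. Illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Expected while the control plane is down; this mirrors the
		// "dial tcp [::1]:8443: connect: connection refused" kubectl errors.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}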
	I1217 12:04:40.864326 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:40.875421 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:40.875500 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:40.900554 3219848 cri.go:89] found id: ""
	I1217 12:04:40.900576 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.900586 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:40.900592 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:40.900654 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:40.926107 3219848 cri.go:89] found id: ""
	I1217 12:04:40.926134 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.926143 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:40.926151 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:40.926210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:40.951315 3219848 cri.go:89] found id: ""
	I1217 12:04:40.951341 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.951350 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:40.951356 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:40.951414 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:40.976682 3219848 cri.go:89] found id: ""
	I1217 12:04:40.976713 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.976723 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:40.976731 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:40.976790 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:41.016365 3219848 cri.go:89] found id: ""
	I1217 12:04:41.016388 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.016396 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:41.016403 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:41.016527 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:41.081810 3219848 cri.go:89] found id: ""
	I1217 12:04:41.081838 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.081848 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:41.081856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:41.081915 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:41.107919 3219848 cri.go:89] found id: ""
	I1217 12:04:41.107946 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.107955 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:41.107962 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:41.108032 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:41.134563 3219848 cri.go:89] found id: ""
	I1217 12:04:41.134589 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.134599 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:41.134608 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:41.134619 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:41.192325 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:41.192362 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:41.208694 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:41.208723 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:41.279184 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:41.267762    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.268617    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.272813    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.273616    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.275116    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:41.279207 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:41.279221 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:41.305398 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:41.305436 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:43.838273 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:43.849251 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:43.849321 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:43.873599 3219848 cri.go:89] found id: ""
	I1217 12:04:43.873671 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.873686 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:43.873694 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:43.873756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:43.902353 3219848 cri.go:89] found id: ""
	I1217 12:04:43.902378 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.902388 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:43.902395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:43.902486 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:43.928175 3219848 cri.go:89] found id: ""
	I1217 12:04:43.928202 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.928213 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:43.928220 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:43.928334 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:43.956883 3219848 cri.go:89] found id: ""
	I1217 12:04:43.956912 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.956921 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:43.956927 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:43.956996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:43.982931 3219848 cri.go:89] found id: ""
	I1217 12:04:43.982968 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.982979 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:43.982986 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:43.983053 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:44.030268 3219848 cri.go:89] found id: ""
	I1217 12:04:44.030294 3219848 logs.go:282] 0 containers: []
	W1217 12:04:44.030304 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:44.030311 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:44.030388 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:44.082991 3219848 cri.go:89] found id: ""
	I1217 12:04:44.083021 3219848 logs.go:282] 0 containers: []
	W1217 12:04:44.083042 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:44.083049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:44.083140 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:44.113120 3219848 cri.go:89] found id: ""
	I1217 12:04:44.113165 3219848 logs.go:282] 0 containers: []
	W1217 12:04:44.113175 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:44.113185 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:44.113204 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:44.172933 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:44.172970 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:44.189039 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:44.189066 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:44.257898 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:44.249336    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.250068    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.251815    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.252362    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.254049    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:44.257924 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:44.257937 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:44.283680 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:44.283715 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
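	The kubelet and containerd logs are pulled from systemd's journal, capped at the last 400 lines per unit, exactly as the Run lines above show. The same capture can be reproduced by hand:

	    # Last 400 journal lines for each unit minikube inspects.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400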
	I1217 12:04:46.821352 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:46.832441 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:46.832520 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:46.858364 3219848 cri.go:89] found id: ""
	I1217 12:04:46.858390 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.858400 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:46.858407 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:46.858488 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:46.882836 3219848 cri.go:89] found id: ""
	I1217 12:04:46.882868 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.882876 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:46.882883 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:46.882952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:46.907815 3219848 cri.go:89] found id: ""
	I1217 12:04:46.907852 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.907861 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:46.907888 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:46.907972 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:46.933329 3219848 cri.go:89] found id: ""
	I1217 12:04:46.933353 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.933363 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:46.933377 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:46.933445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:46.959520 3219848 cri.go:89] found id: ""
	I1217 12:04:46.959546 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.959555 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:46.959562 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:46.959621 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:46.986527 3219848 cri.go:89] found id: ""
	I1217 12:04:46.986551 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.986561 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:46.986567 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:46.986627 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:47.034742 3219848 cri.go:89] found id: ""
	I1217 12:04:47.034765 3219848 logs.go:282] 0 containers: []
	W1217 12:04:47.034775 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:47.034781 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:47.034838 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:47.072115 3219848 cri.go:89] found id: ""
	I1217 12:04:47.072143 3219848 logs.go:282] 0 containers: []
	W1217 12:04:47.072152 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:47.072161 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:47.072173 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:47.138106 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:47.138141 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:47.156338 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:47.156381 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:47.224864 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:47.215946    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.216453    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.218361    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.219127    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.220895    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:47.224889 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:47.224900 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:47.250608 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:47.250644 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
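	The dmesg pass keeps only kernel messages at warning severity or worse (warn through emerg), disables the pager and color, uses human-readable output, and trims to the last 400 lines; the command is taken verbatim from the Run line:

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400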
	I1217 12:04:49.780985 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:49.791927 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:49.792002 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:49.817502 3219848 cri.go:89] found id: ""
	I1217 12:04:49.817526 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.817536 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:49.817542 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:49.817621 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:49.844464 3219848 cri.go:89] found id: ""
	I1217 12:04:49.844490 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.844499 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:49.844506 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:49.844614 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:49.874956 3219848 cri.go:89] found id: ""
	I1217 12:04:49.874982 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.874991 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:49.874998 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:49.875079 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:49.904772 3219848 cri.go:89] found id: ""
	I1217 12:04:49.904795 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.904804 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:49.904810 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:49.904872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:49.934337 3219848 cri.go:89] found id: ""
	I1217 12:04:49.934362 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.934372 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:49.934379 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:49.934472 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:49.959338 3219848 cri.go:89] found id: ""
	I1217 12:04:49.959362 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.959371 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:49.959378 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:49.959481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:49.984578 3219848 cri.go:89] found id: ""
	I1217 12:04:49.984606 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.984614 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:49.984621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:49.984679 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:50.043309 3219848 cri.go:89] found id: ""
	I1217 12:04:50.043395 3219848 logs.go:282] 0 containers: []
	W1217 12:04:50.043419 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:50.043456 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:50.043486 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:50.135752 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:50.127538    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.128073    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.129753    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.130124    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.131773    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:50.135777 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:50.135792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:50.162030 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:50.162067 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:50.196447 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:50.196478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:50.254281 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:50.254318 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
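	The "container status" step prefers crictl when it is on the PATH and falls back to the Docker CLI otherwise: the backquoted substitution evaluates to the crictl path when which finds one, and to the literal word crictl when it does not, so the command stays well formed either way, and the trailing "|| sudo docker ps -a" covers nodes without a working crictl. As run in the log:

	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a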
	I1217 12:04:52.772408 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:52.783553 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:52.783633 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:52.820008 3219848 cri.go:89] found id: ""
	I1217 12:04:52.820043 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.820058 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:52.820065 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:52.820129 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:52.844905 3219848 cri.go:89] found id: ""
	I1217 12:04:52.844941 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.844949 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:52.844956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:52.845029 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:52.869543 3219848 cri.go:89] found id: ""
	I1217 12:04:52.869569 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.869586 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:52.869622 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:52.869698 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:52.894131 3219848 cri.go:89] found id: ""
	I1217 12:04:52.894160 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.894170 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:52.894177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:52.894266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:52.921694 3219848 cri.go:89] found id: ""
	I1217 12:04:52.921719 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.921729 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:52.921736 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:52.921795 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:52.947377 3219848 cri.go:89] found id: ""
	I1217 12:04:52.947411 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.947421 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:52.947452 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:52.947531 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:52.972742 3219848 cri.go:89] found id: ""
	I1217 12:04:52.972768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.972777 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:52.972787 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:52.972866 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:53.016484 3219848 cri.go:89] found id: ""
	I1217 12:04:53.016566 3219848 logs.go:282] 0 containers: []
	W1217 12:04:53.016588 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:53.016612 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:53.016657 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:53.091083 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:53.091153 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:53.109051 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:53.109075 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:53.174985 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:53.166259    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.167099    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.168974    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.169308    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.170842    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:53.175008 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:53.175021 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:53.201645 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:53.201680 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:55.729262 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:55.742969 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:55.743043 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:55.772352 3219848 cri.go:89] found id: ""
	I1217 12:04:55.772374 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.772383 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:55.772389 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:55.772461 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:55.799085 3219848 cri.go:89] found id: ""
	I1217 12:04:55.799111 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.799120 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:55.799126 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:55.799191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:55.825805 3219848 cri.go:89] found id: ""
	I1217 12:04:55.825830 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.825839 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:55.825846 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:55.825907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:55.855875 3219848 cri.go:89] found id: ""
	I1217 12:04:55.855964 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.855979 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:55.855987 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:55.856055 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:55.881512 3219848 cri.go:89] found id: ""
	I1217 12:04:55.881539 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.881548 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:55.881555 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:55.881615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:55.911117 3219848 cri.go:89] found id: ""
	I1217 12:04:55.911149 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.911158 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:55.911165 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:55.911236 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:55.936738 3219848 cri.go:89] found id: ""
	I1217 12:04:55.936774 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.936783 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:55.936790 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:55.936865 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:55.962878 3219848 cri.go:89] found id: ""
	I1217 12:04:55.962904 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.962918 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:55.962937 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:55.962950 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:55.991943 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:55.991988 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:56.062887 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:56.062922 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:56.129315 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:56.129356 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:56.145986 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:56.146013 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:56.214623 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:56.205795    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.206560    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208125    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208507    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.210116    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
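	The cycles above are spaced roughly three seconds apart, consistent with minikube polling for the apiserver and re-gathering diagnostics on every miss until its own timeout expires. A hedged shell approximation of that wait loop (the real loop lives in minikube's Go code; the three-second interval is inferred from the timestamps):

	    # Poll until an apiserver process matching minikube's pattern appears.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 3
	    done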
	I1217 12:04:58.715974 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:58.727395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:58.727466 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:58.751936 3219848 cri.go:89] found id: ""
	I1217 12:04:58.751961 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.751970 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:58.751977 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:58.752036 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:58.778416 3219848 cri.go:89] found id: ""
	I1217 12:04:58.778439 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.778447 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:58.778454 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:58.778517 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:58.806136 3219848 cri.go:89] found id: ""
	I1217 12:04:58.806160 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.806169 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:58.806175 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:58.806233 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:58.835276 3219848 cri.go:89] found id: ""
	I1217 12:04:58.835311 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.835321 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:58.835328 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:58.835396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:58.862517 3219848 cri.go:89] found id: ""
	I1217 12:04:58.862596 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.862612 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:58.862620 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:58.862695 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:58.888027 3219848 cri.go:89] found id: ""
	I1217 12:04:58.888055 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.888065 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:58.888072 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:58.888156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:58.913027 3219848 cri.go:89] found id: ""
	I1217 12:04:58.913106 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.913123 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:58.913132 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:58.913210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:58.938554 3219848 cri.go:89] found id: ""
	I1217 12:04:58.938578 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.938587 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:58.938599 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:58.938611 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:58.995142 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:58.995175 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:59.026309 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:59.026388 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:59.124135 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:59.115677    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117093    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117405    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.118755    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.119195    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:59.124157 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:59.124170 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:59.149882 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:59.149925 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:01.680518 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:01.692630 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:01.692709 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:01.721621 3219848 cri.go:89] found id: ""
	I1217 12:05:01.721647 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.721656 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:01.721664 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:01.721731 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:01.748186 3219848 cri.go:89] found id: ""
	I1217 12:05:01.748213 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.748232 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:01.748239 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:01.748310 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:01.774670 3219848 cri.go:89] found id: ""
	I1217 12:05:01.774694 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.774703 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:01.774709 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:01.774770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:01.800533 3219848 cri.go:89] found id: ""
	I1217 12:05:01.800609 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.800635 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:01.800649 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:01.800726 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:01.832193 3219848 cri.go:89] found id: ""
	I1217 12:05:01.832221 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.832230 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:01.832238 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:01.832314 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:01.859699 3219848 cri.go:89] found id: ""
	I1217 12:05:01.859733 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.859743 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:01.859750 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:01.859825 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:01.891844 3219848 cri.go:89] found id: ""
	I1217 12:05:01.891869 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.891893 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:01.891901 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:01.891988 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:01.922765 3219848 cri.go:89] found id: ""
	I1217 12:05:01.922791 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.922801 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:01.922811 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:01.922821 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:01.984618 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:01.984654 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:02.003531 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:02.003573 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:02.119039 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:02.109047    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.109431    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.111503    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.112283    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.113931    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:02.119062 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:02.119074 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:02.145052 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:02.145090 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
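Each gather cycle ends here, and the loop then re-probes for an apiserver process roughly every three seconds (12:05:01, 12:05:04, 12:05:07, ...). A sketch of that retry cadence, assuming the same pgrep pattern shown in the log; the function name and the fixed sleep are illustrative only, not minikube's actual code:

```go
package main

import (
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process matching the
// pattern from the log exists; pgrep exits non-zero when nothing matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for !apiserverRunning() {
		// The real loop shells out to journalctl, dmesg, kubectl describe
		// nodes, and crictl here before probing again.
		time.Sleep(3 * time.Second)
	}
}
```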
	I1217 12:05:04.675110 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:04.686658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:04.686731 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:04.714143 3219848 cri.go:89] found id: ""
	I1217 12:05:04.714169 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.714178 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:04.714185 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:04.714246 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:04.741446 3219848 cri.go:89] found id: ""
	I1217 12:05:04.741472 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.741481 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:04.741488 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:04.741549 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:04.771197 3219848 cri.go:89] found id: ""
	I1217 12:05:04.771224 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.771234 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:04.771241 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:04.771305 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:04.801798 3219848 cri.go:89] found id: ""
	I1217 12:05:04.801824 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.801834 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:04.801840 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:04.801901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:04.827212 3219848 cri.go:89] found id: ""
	I1217 12:05:04.827240 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.827249 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:04.827257 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:04.827322 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:04.852794 3219848 cri.go:89] found id: ""
	I1217 12:05:04.852821 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.852831 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:04.852838 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:04.852898 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:04.879034 3219848 cri.go:89] found id: ""
	I1217 12:05:04.879058 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.879069 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:04.879075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:04.879134 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:04.904782 3219848 cri.go:89] found id: ""
	I1217 12:05:04.904806 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.904814 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:04.904823 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:04.904833 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:04.961550 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:04.961581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:04.977831 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:04.977861 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:05.101127 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:05.083862    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.093276    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.094908    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.095507    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.097102    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:05.101155 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:05.101168 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:05.128517 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:05.128550 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
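Every describe-nodes attempt fails for the same underlying reason: nothing is listening on the apiserver port, so kubectl's requests to https://localhost:8443 are refused before any API call happens. A quick TCP probe (a sketch to be run on the node; the address is taken from the logged error) reproduces the symptom without kubectl:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the port kubectl keeps failing against; a dial error here
	// matches the "connection refused" memcache.go lines in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
```

On a healthy node the probe connects; here it would print the same dial error that kubectl surfaces.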
	I1217 12:05:07.660217 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:07.670837 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:07.670907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:07.696773 3219848 cri.go:89] found id: ""
	I1217 12:05:07.696800 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.696809 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:07.696816 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:07.696873 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:07.722665 3219848 cri.go:89] found id: ""
	I1217 12:05:07.722688 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.722697 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:07.722703 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:07.722770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:07.748882 3219848 cri.go:89] found id: ""
	I1217 12:05:07.748907 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.748916 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:07.748922 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:07.748983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:07.777951 3219848 cri.go:89] found id: ""
	I1217 12:05:07.777976 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.777985 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:07.777992 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:07.778052 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:07.807386 3219848 cri.go:89] found id: ""
	I1217 12:05:07.807414 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.807423 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:07.807430 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:07.807492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:07.836910 3219848 cri.go:89] found id: ""
	I1217 12:05:07.836938 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.836947 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:07.836954 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:07.837012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:07.861301 3219848 cri.go:89] found id: ""
	I1217 12:05:07.861327 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.861337 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:07.861343 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:07.861402 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:07.885389 3219848 cri.go:89] found id: ""
	I1217 12:05:07.885412 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.885422 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:07.885431 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:07.885444 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:07.940922 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:07.940954 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:07.956764 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:07.956792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:08.040092 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:08.023500    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.024045    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.030582    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.031321    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.035763    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:08.040167 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:08.040195 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:08.076595 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:08.076674 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:10.614548 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:10.625273 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:10.625344 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:10.649744 3219848 cri.go:89] found id: ""
	I1217 12:05:10.649774 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.649782 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:10.649789 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:10.649847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:10.673909 3219848 cri.go:89] found id: ""
	I1217 12:05:10.673936 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.673945 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:10.673952 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:10.674010 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:10.699817 3219848 cri.go:89] found id: ""
	I1217 12:05:10.699840 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.699849 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:10.699855 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:10.699914 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:10.724608 3219848 cri.go:89] found id: ""
	I1217 12:05:10.724630 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.724638 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:10.724645 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:10.724702 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:10.756858 3219848 cri.go:89] found id: ""
	I1217 12:05:10.756883 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.756892 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:10.756899 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:10.756959 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:10.787011 3219848 cri.go:89] found id: ""
	I1217 12:05:10.787037 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.787046 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:10.787052 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:10.787111 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:10.816658 3219848 cri.go:89] found id: ""
	I1217 12:05:10.816683 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.816691 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:10.816698 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:10.816757 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:10.841817 3219848 cri.go:89] found id: ""
	I1217 12:05:10.841882 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.841899 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:10.841909 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:10.841920 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:10.899952 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:10.899994 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:10.915585 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:10.915615 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:10.983597 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:10.975197    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.975853    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.977490    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.978147    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.979658    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:10.983619 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:10.983636 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:11.013827 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:11.013865 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:13.590017 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:13.601224 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:13.601300 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:13.630748 3219848 cri.go:89] found id: ""
	I1217 12:05:13.630771 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.630781 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:13.630788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:13.630845 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:13.659125 3219848 cri.go:89] found id: ""
	I1217 12:05:13.659150 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.659160 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:13.659166 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:13.659224 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:13.689040 3219848 cri.go:89] found id: ""
	I1217 12:05:13.689066 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.689075 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:13.689082 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:13.689149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:13.713917 3219848 cri.go:89] found id: ""
	I1217 12:05:13.713941 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.713949 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:13.713956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:13.714016 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:13.738663 3219848 cri.go:89] found id: ""
	I1217 12:05:13.738686 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.738695 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:13.738701 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:13.738759 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:13.762897 3219848 cri.go:89] found id: ""
	I1217 12:05:13.762922 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.762931 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:13.762938 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:13.762995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:13.791695 3219848 cri.go:89] found id: ""
	I1217 12:05:13.791720 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.791736 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:13.791743 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:13.791800 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:13.821207 3219848 cri.go:89] found id: ""
	I1217 12:05:13.821230 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.821239 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:13.821248 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:13.821259 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:13.848837 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:13.848867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:13.906239 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:13.906278 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:13.921882 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:13.921917 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:13.991574 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:13.982172    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.983086    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985111    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985659    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.986629    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:13.991596 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:13.991609 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:16.525032 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:16.535486 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:16.535556 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:16.561699 3219848 cri.go:89] found id: ""
	I1217 12:05:16.561721 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.561730 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:16.561736 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:16.561792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:16.586264 3219848 cri.go:89] found id: ""
	I1217 12:05:16.586287 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.586296 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:16.586303 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:16.586360 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:16.611385 3219848 cri.go:89] found id: ""
	I1217 12:05:16.611409 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.611418 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:16.611425 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:16.611485 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:16.636230 3219848 cri.go:89] found id: ""
	I1217 12:05:16.636256 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.636267 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:16.636274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:16.636332 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:16.660919 3219848 cri.go:89] found id: ""
	I1217 12:05:16.660942 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.660950 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:16.660956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:16.661013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:16.688962 3219848 cri.go:89] found id: ""
	I1217 12:05:16.688987 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.688996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:16.689003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:16.689070 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:16.719405 3219848 cri.go:89] found id: ""
	I1217 12:05:16.719428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.719437 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:16.719443 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:16.719502 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:16.745166 3219848 cri.go:89] found id: ""
	I1217 12:05:16.745192 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.745201 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:16.745211 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:16.745223 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:16.771975 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:16.772014 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:16.804149 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:16.804180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:16.861212 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:16.861249 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:16.877226 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:16.877257 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:16.943896 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:16.935292    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.935946    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.937663    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.938200    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.939861    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:19.444922 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:19.455525 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:19.455598 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:19.480970 3219848 cri.go:89] found id: ""
	I1217 12:05:19.480995 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.481006 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:19.481017 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:19.481079 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:19.506235 3219848 cri.go:89] found id: ""
	I1217 12:05:19.506258 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.506267 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:19.506274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:19.506333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:19.532063 3219848 cri.go:89] found id: ""
	I1217 12:05:19.532086 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.532095 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:19.532105 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:19.532165 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:19.562427 3219848 cri.go:89] found id: ""
	I1217 12:05:19.562450 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.562460 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:19.562466 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:19.562524 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:19.587869 3219848 cri.go:89] found id: ""
	I1217 12:05:19.587903 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.587912 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:19.587919 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:19.587990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:19.612889 3219848 cri.go:89] found id: ""
	I1217 12:05:19.612916 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.612925 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:19.612932 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:19.612990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:19.637949 3219848 cri.go:89] found id: ""
	I1217 12:05:19.637972 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.637980 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:19.637992 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:19.638053 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:19.666633 3219848 cri.go:89] found id: ""
	I1217 12:05:19.666703 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.666740 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:19.666769 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:19.666798 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:19.726394 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:19.726430 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:19.742581 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:19.742662 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:19.807145 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:19.798144    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.799463    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.800143    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.801047    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.802652    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:19.807174 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:19.807187 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:19.832758 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:19.832792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:22.366107 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:22.376592 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:22.376666 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:22.401822 3219848 cri.go:89] found id: ""
	I1217 12:05:22.401847 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.401857 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:22.401863 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:22.401921 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:22.425903 3219848 cri.go:89] found id: ""
	I1217 12:05:22.425927 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.425936 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:22.425943 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:22.426008 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:22.454459 3219848 cri.go:89] found id: ""
	I1217 12:05:22.454484 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.454493 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:22.454499 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:22.454559 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:22.479178 3219848 cri.go:89] found id: ""
	I1217 12:05:22.479202 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.479212 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:22.479219 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:22.479276 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:22.505859 3219848 cri.go:89] found id: ""
	I1217 12:05:22.505885 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.505900 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:22.505908 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:22.505995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:22.531485 3219848 cri.go:89] found id: ""
	I1217 12:05:22.531506 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.531515 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:22.531523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:22.531583 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:22.558267 3219848 cri.go:89] found id: ""
	I1217 12:05:22.558343 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.558360 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:22.558367 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:22.558427 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:22.588380 3219848 cri.go:89] found id: ""
	I1217 12:05:22.588431 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.588442 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:22.588451 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:22.588463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:22.647590 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:22.647629 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:22.665568 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:22.665597 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:22.738273 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:22.729900    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.730477    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732137    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732564    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.734423    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:22.738298 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:22.738310 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:22.764468 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:22.764503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
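The "container status" step is a shell fallback: use crictl if `which` resolves it, otherwise fall back to `docker ps -a`. The same tool-selection order in Go (a sketch; probing PATH first is an assumption that mirrors, rather than reproduces, the logged one-liner):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when it is on PATH, as the logged shell line does;
	// otherwise fall back to docker for the container listing.
	tool := "docker"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("%s ps -a failed: %v\n%s", tool, err, out)
		return
	}
	fmt.Print(string(out))
}
```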
	I1217 12:05:25.296756 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:25.320288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:25.320356 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:25.347938 3219848 cri.go:89] found id: ""
	I1217 12:05:25.347959 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.347967 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:25.347973 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:25.348030 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:25.375407 3219848 cri.go:89] found id: ""
	I1217 12:05:25.375428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.375438 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:25.375444 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:25.375501 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:25.400165 3219848 cri.go:89] found id: ""
	I1217 12:05:25.400187 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.400195 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:25.400202 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:25.400266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:25.428203 3219848 cri.go:89] found id: ""
	I1217 12:05:25.428229 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.428240 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:25.428247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:25.428307 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:25.454651 3219848 cri.go:89] found id: ""
	I1217 12:05:25.454675 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.454685 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:25.454692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:25.454754 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:25.478961 3219848 cri.go:89] found id: ""
	I1217 12:05:25.478987 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.478996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:25.479003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:25.479088 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:25.508637 3219848 cri.go:89] found id: ""
	I1217 12:05:25.508661 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.508670 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:25.508676 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:25.508782 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:25.534245 3219848 cri.go:89] found id: ""
	I1217 12:05:25.534270 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.534279 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
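	Each cycle above walks the expected component names and asks the CRI for matching containers; every query returns an empty ID list. A minimal sketch of the same enumeration, assuming the component list this log iterates over each pass:

	# List containers for each expected component, as the per-cycle cri.go queries do:
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done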
	I1217 12:05:25.534289 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:25.534306 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:25.569632 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:25.569662 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:25.625748 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:25.625783 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:25.641383 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:25.641409 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:25.709135 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:25.700728    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.701299    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703107    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703541    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.705177    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:25.700728    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.701299    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703107    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703541    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.705177    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:25.709156 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:25.709168 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:28.233802 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:28.244795 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:28.244872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:28.291390 3219848 cri.go:89] found id: ""
	I1217 12:05:28.291412 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.291421 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:28.291427 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:28.291488 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:28.355887 3219848 cri.go:89] found id: ""
	I1217 12:05:28.355909 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.355917 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:28.355924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:28.355983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:28.381610 3219848 cri.go:89] found id: ""
	I1217 12:05:28.381633 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.381641 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:28.381647 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:28.381707 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:28.407516 3219848 cri.go:89] found id: ""
	I1217 12:05:28.407544 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.407553 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:28.407560 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:28.407622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:28.436914 3219848 cri.go:89] found id: ""
	I1217 12:05:28.436982 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.437006 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:28.437021 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:28.437098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:28.461189 3219848 cri.go:89] found id: ""
	I1217 12:05:28.461258 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.461283 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:28.461298 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:28.461373 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:28.490913 3219848 cri.go:89] found id: ""
	I1217 12:05:28.490948 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.490958 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:28.490965 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:28.491033 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:28.521566 3219848 cri.go:89] found id: ""
	I1217 12:05:28.521589 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.521599 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:28.521610 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:28.521622 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:28.577123 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:28.577159 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:28.593088 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:28.593119 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:28.655447 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:28.646846   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.647657   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649169   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649707   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.651192   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:28.646846   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.647657   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649169   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649707   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.651192   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:28.655472 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:28.655484 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:28.680532 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:28.680566 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:31.213979 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:31.224716 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:31.224784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:31.257050 3219848 cri.go:89] found id: ""
	I1217 12:05:31.257071 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.257079 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:31.257085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:31.257141 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:31.315656 3219848 cri.go:89] found id: ""
	I1217 12:05:31.315677 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.315686 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:31.315692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:31.315746 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:31.349340 3219848 cri.go:89] found id: ""
	I1217 12:05:31.349360 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.349369 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:31.349375 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:31.349432 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:31.374728 3219848 cri.go:89] found id: ""
	I1217 12:05:31.374755 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.374764 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:31.374771 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:31.374833 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:31.401386 3219848 cri.go:89] found id: ""
	I1217 12:05:31.401422 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.401432 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:31.401439 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:31.401511 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:31.427234 3219848 cri.go:89] found id: ""
	I1217 12:05:31.427260 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.427270 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:31.427277 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:31.427338 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:31.452628 3219848 cri.go:89] found id: ""
	I1217 12:05:31.452666 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.452676 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:31.452684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:31.452756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:31.476684 3219848 cri.go:89] found id: ""
	I1217 12:05:31.476717 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.476725 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:31.476735 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:31.476745 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:31.533895 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:31.533928 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:31.549405 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:31.549433 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:31.617988 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:31.609983   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.610706   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612313   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612915   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.613854   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:31.609983   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.610706   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612313   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612915   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.613854   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:31.618022 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:31.618051 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:31.643544 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:31.643575 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:34.173214 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:34.183798 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:34.183881 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:34.208274 3219848 cri.go:89] found id: ""
	I1217 12:05:34.208299 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.208309 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:34.208315 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:34.208377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:34.232844 3219848 cri.go:89] found id: ""
	I1217 12:05:34.232870 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.232879 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:34.232886 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:34.232947 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:34.298630 3219848 cri.go:89] found id: ""
	I1217 12:05:34.298656 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.298665 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:34.298672 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:34.298732 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:34.352614 3219848 cri.go:89] found id: ""
	I1217 12:05:34.352657 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.352672 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:34.352679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:34.352745 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:34.378134 3219848 cri.go:89] found id: ""
	I1217 12:05:34.378156 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.378165 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:34.378171 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:34.378234 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:34.402637 3219848 cri.go:89] found id: ""
	I1217 12:05:34.402660 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.402668 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:34.402675 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:34.402758 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:34.428834 3219848 cri.go:89] found id: ""
	I1217 12:05:34.428906 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.428941 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:34.428948 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:34.429006 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:34.459618 3219848 cri.go:89] found id: ""
	I1217 12:05:34.459641 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.459654 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:34.459663 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:34.459674 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:34.514834 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:34.514867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:34.531691 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:34.531717 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:34.603404 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:34.594905   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.595653   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597085   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597664   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.599260   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:34.594905   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.595653   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597085   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597664   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.599260   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:34.603478 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:34.603498 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:34.629092 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:34.629131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:37.158533 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:37.170305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:37.170377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:37.195895 3219848 cri.go:89] found id: ""
	I1217 12:05:37.195920 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.195929 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:37.195936 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:37.195994 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:37.221126 3219848 cri.go:89] found id: ""
	I1217 12:05:37.221153 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.221162 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:37.221170 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:37.221228 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:37.246560 3219848 cri.go:89] found id: ""
	I1217 12:05:37.246584 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.246593 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:37.246600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:37.246663 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:37.289595 3219848 cri.go:89] found id: ""
	I1217 12:05:37.289620 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.289629 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:37.289635 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:37.289707 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:37.324903 3219848 cri.go:89] found id: ""
	I1217 12:05:37.324923 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.324932 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:37.324939 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:37.324997 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:37.361173 3219848 cri.go:89] found id: ""
	I1217 12:05:37.361194 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.361204 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:37.361210 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:37.361269 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:37.389438 3219848 cri.go:89] found id: ""
	I1217 12:05:37.389461 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.389470 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:37.389476 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:37.389537 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:37.414662 3219848 cri.go:89] found id: ""
	I1217 12:05:37.414700 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.414710 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:37.414719 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:37.414731 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:37.478614 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:37.471157   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.471613   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473091   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473470   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.474881   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:37.471157   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.471613   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473091   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473470   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.474881   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:37.478647 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:37.478661 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:37.504204 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:37.504241 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:37.535207 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:37.535282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:37.594334 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:37.594382 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:40.110392 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:40.122282 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:40.122363 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:40.148146 3219848 cri.go:89] found id: ""
	I1217 12:05:40.148171 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.148180 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:40.148186 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:40.148248 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:40.175122 3219848 cri.go:89] found id: ""
	I1217 12:05:40.175149 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.175158 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:40.175164 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:40.175224 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:40.201606 3219848 cri.go:89] found id: ""
	I1217 12:05:40.201629 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.201638 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:40.201644 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:40.201702 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:40.227663 3219848 cri.go:89] found id: ""
	I1217 12:05:40.227688 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.227697 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:40.227704 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:40.227760 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:40.279855 3219848 cri.go:89] found id: ""
	I1217 12:05:40.279881 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.279889 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:40.279896 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:40.279955 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:40.341349 3219848 cri.go:89] found id: ""
	I1217 12:05:40.341372 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.341381 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:40.341388 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:40.341445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:40.366250 3219848 cri.go:89] found id: ""
	I1217 12:05:40.366276 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.366285 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:40.366292 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:40.366374 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:40.390064 3219848 cri.go:89] found id: ""
	I1217 12:05:40.390091 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.390100 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:40.390112 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:40.390143 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:40.417840 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:40.417866 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:40.474223 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:40.474260 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:40.489995 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:40.490025 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:40.558792 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:40.550389   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.551049   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.552779   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.553286   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.554780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:40.550389   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.551049   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.552779   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.553286   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.554780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:40.558816 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:40.558829 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:43.085654 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:43.096719 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:43.096788 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:43.122755 3219848 cri.go:89] found id: ""
	I1217 12:05:43.122822 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.122846 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:43.122862 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:43.122942 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:43.149072 3219848 cri.go:89] found id: ""
	I1217 12:05:43.149097 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.149106 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:43.149113 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:43.149192 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:43.175863 3219848 cri.go:89] found id: ""
	I1217 12:05:43.175889 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.175897 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:43.175929 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:43.176015 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:43.202533 3219848 cri.go:89] found id: ""
	I1217 12:05:43.202572 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.202580 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:43.202587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:43.202649 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:43.227233 3219848 cri.go:89] found id: ""
	I1217 12:05:43.227307 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.227331 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:43.227352 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:43.227449 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:43.293609 3219848 cri.go:89] found id: ""
	I1217 12:05:43.293677 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.293701 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:43.293723 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:43.293807 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:43.345463 3219848 cri.go:89] found id: ""
	I1217 12:05:43.345537 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.345563 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:43.345584 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:43.345692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:43.376719 3219848 cri.go:89] found id: ""
	I1217 12:05:43.376754 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.376763 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:43.376772 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:43.376785 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:43.434376 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:43.434408 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:43.449996 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:43.450023 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:43.518159 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:43.509135   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.509732   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511431   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511909   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.513376   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:43.509135   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.509732   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511431   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511909   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.513376   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:43.518179 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:43.518193 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:43.544448 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:43.544487 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:46.079862 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:46.091017 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:46.091085 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:46.116886 3219848 cri.go:89] found id: ""
	I1217 12:05:46.116913 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.116924 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:46.116939 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:46.117008 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:46.142188 3219848 cri.go:89] found id: ""
	I1217 12:05:46.142216 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.142227 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:46.142234 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:46.142296 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:46.168033 3219848 cri.go:89] found id: ""
	I1217 12:05:46.168059 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.168068 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:46.168075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:46.168141 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:46.194149 3219848 cri.go:89] found id: ""
	I1217 12:05:46.194178 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.194188 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:46.194197 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:46.194257 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:46.220319 3219848 cri.go:89] found id: ""
	I1217 12:05:46.220345 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.220354 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:46.220360 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:46.220456 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:46.246104 3219848 cri.go:89] found id: ""
	I1217 12:05:46.246131 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.246140 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:46.246147 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:46.246208 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:46.281496 3219848 cri.go:89] found id: ""
	I1217 12:05:46.281520 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.281528 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:46.281535 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:46.281597 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:46.327477 3219848 cri.go:89] found id: ""
	I1217 12:05:46.327558 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.327582 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:46.327625 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:46.327653 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:46.407413 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:46.407451 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:46.423419 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:46.423448 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:46.489920 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:46.481686   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.482194   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.483773   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.484191   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.485671   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:46.481686   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.482194   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.483773   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.484191   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.485671   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:46.489945 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:46.489959 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:46.516022 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:46.516061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
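For reference, the poll above can be reproduced by hand. A minimal sketch, assuming shell access to the node (e.g. via minikube ssh); the crictl invocation is copied verbatim from the ssh_runner lines above, while the loop wrapper itself is only illustrative:

    # One pass of the component poll: list any container (running or exited)
    # for each expected name; empty output matches the 'found id: ""' lines.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%s: ' "$c"
      sudo crictl ps -a --quiet --name="$c"
      echo
    done

Empty output after every name is exactly the state the log records: no control-plane containers were ever created, so each iteration falls through to gathering kubelet, dmesg, describe-nodes, containerd and container-status logs instead.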
	[The identical polling cycle repeats every ~3 seconds at 12:05:49, 12:05:52, 12:05:55, 12:05:58, 12:06:01, 12:06:04 and 12:06:07 (kubectl pids 10897, 11000, 11105, 11231, 11357, 11449, 11566): pgrep finds no kube-apiserver process, crictl finds no containers matching kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet or kubernetes-dashboard, and "kubectl describe nodes" fails with the same "dial tcp [::1]:8443: connect: connection refused" stderr shown above. Only the order of the "Gathering logs for ..." steps differs between iterations.]
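Every describe-nodes failure in these cycles is the same symptom: nothing is serving on localhost:8443 inside the node. A hypothetical quick check (not part of the report; the ss and curl probes are assumptions, though /healthz is a standard kube-apiserver endpoint):

    # Is anything listening on the apiserver port, and does it answer health checks?
    sudo ss -ltnp | grep ':8443' || echo 'nothing listening on :8443'
    curl -ksf https://localhost:8443/healthz || echo 'apiserver unreachable'

With no kube-apiserver container ever created, both probes would fail here, consistent with the empty crictl results and the connection-refused errors.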
	I1217 12:06:09.906484 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:09.917044 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:09.917120 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:09.941939 3219848 cri.go:89] found id: ""
	I1217 12:06:09.942004 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.942024 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:09.942031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:09.942088 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:09.966481 3219848 cri.go:89] found id: ""
	I1217 12:06:09.966507 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.966515 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:09.966523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:09.966622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:09.991806 3219848 cri.go:89] found id: ""
	I1217 12:06:09.991830 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.991839 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:09.991845 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:09.991901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:10.027713 3219848 cri.go:89] found id: ""
	I1217 12:06:10.027784 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.027800 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:10.027808 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:10.027874 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:10.060097 3219848 cri.go:89] found id: ""
	I1217 12:06:10.060124 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.060133 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:10.060140 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:10.060203 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:10.091977 3219848 cri.go:89] found id: ""
	I1217 12:06:10.092002 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.092010 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:10.092018 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:10.092081 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:10.118481 3219848 cri.go:89] found id: ""
	I1217 12:06:10.118504 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.118513 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:10.118526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:10.118586 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:10.145196 3219848 cri.go:89] found id: ""
	I1217 12:06:10.145263 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.145278 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:10.145288 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:10.145306 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:10.161573 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:10.161603 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:10.227235 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:10.218460   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.219270   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.220964   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.221573   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.223258   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:10.227259 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:10.227273 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:10.253333 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:10.253644 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:10.302209 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:10.302284 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
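
The block above is one complete iteration of the probe minikube runs while waiting for the control plane: pgrep for a kube-apiserver process, a crictl query per expected component, then a sweep of kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal sketch of the same per-component query, assuming a shell inside the node (e.g. minikube ssh -p functional-232588) where crictl is installed:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # Empty output is exactly the 'found id: ""' case logged above.
	  [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
	done

Every pass in this run takes the second branch for all eight names, which is why the gathered logs never include a control-plane container.
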
	I1217 12:06:12.881891 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:12.892449 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:12.892519 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:12.919824 3219848 cri.go:89] found id: ""
	I1217 12:06:12.919848 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.919856 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:12.919863 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:12.919924 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:12.946684 3219848 cri.go:89] found id: ""
	I1217 12:06:12.946711 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.946721 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:12.946728 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:12.946808 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:12.970796 3219848 cri.go:89] found id: ""
	I1217 12:06:12.970820 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.970830 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:12.970837 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:12.970904 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:12.996393 3219848 cri.go:89] found id: ""
	I1217 12:06:12.996459 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.996469 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:12.996476 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:12.996538 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:13.022560 3219848 cri.go:89] found id: ""
	I1217 12:06:13.022587 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.022596 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:13.022603 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:13.022664 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:13.050809 3219848 cri.go:89] found id: ""
	I1217 12:06:13.050839 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.050849 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:13.050856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:13.050919 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:13.077432 3219848 cri.go:89] found id: ""
	I1217 12:06:13.077460 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.077469 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:13.077477 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:13.077540 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:13.104029 3219848 cri.go:89] found id: ""
	I1217 12:06:13.104056 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.104065 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:13.104075 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:13.104086 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:13.162000 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:13.162038 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:13.177865 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:13.177891 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:13.241266 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:13.232767   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.233565   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235109   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235417   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.236871   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:13.241289 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:13.241302 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:13.271232 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:13.271269 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:15.839567 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:15.850326 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:15.850396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:15.875471 3219848 cri.go:89] found id: ""
	I1217 12:06:15.875493 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.875502 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:15.875509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:15.875566 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:15.899977 3219848 cri.go:89] found id: ""
	I1217 12:06:15.899998 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.900007 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:15.900013 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:15.900073 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:15.926093 3219848 cri.go:89] found id: ""
	I1217 12:06:15.926117 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.926126 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:15.926133 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:15.926193 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:15.951373 3219848 cri.go:89] found id: ""
	I1217 12:06:15.951397 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.951407 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:15.951414 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:15.951470 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:15.976937 3219848 cri.go:89] found id: ""
	I1217 12:06:15.976963 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.976972 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:15.976979 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:15.977041 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:16.003518 3219848 cri.go:89] found id: ""
	I1217 12:06:16.003717 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.003750 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:16.003786 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:16.003901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:16.032115 3219848 cri.go:89] found id: ""
	I1217 12:06:16.032142 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.032151 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:16.032159 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:16.032219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:16.061490 3219848 cri.go:89] found id: ""
	I1217 12:06:16.061517 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.061526 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:16.061536 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:16.061547 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:16.077146 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:16.077179 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:16.145955 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:16.137946   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.138559   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140379   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140910   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.142053   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:16.145981 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:16.145995 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:16.172145 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:16.172180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:16.206805 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:16.206833 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:18.766689 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:18.777034 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:18.777108 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:18.805815 3219848 cri.go:89] found id: ""
	I1217 12:06:18.805838 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.805847 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:18.805853 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:18.805910 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:18.831468 3219848 cri.go:89] found id: ""
	I1217 12:06:18.831492 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.831501 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:18.831508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:18.831567 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:18.859309 3219848 cri.go:89] found id: ""
	I1217 12:06:18.859339 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.859349 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:18.859368 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:18.859436 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:18.884524 3219848 cri.go:89] found id: ""
	I1217 12:06:18.884552 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.884561 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:18.884569 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:18.884665 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:18.909522 3219848 cri.go:89] found id: ""
	I1217 12:06:18.909545 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.909554 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:18.909561 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:18.909620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:18.935126 3219848 cri.go:89] found id: ""
	I1217 12:06:18.935151 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.935161 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:18.935167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:18.935227 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:18.964480 3219848 cri.go:89] found id: ""
	I1217 12:06:18.964506 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.964516 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:18.964522 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:18.964581 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:18.990408 3219848 cri.go:89] found id: ""
	I1217 12:06:18.990435 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.990444 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:18.990454 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:18.990466 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:19.017937 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:19.017974 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:19.048976 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:19.049004 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:19.108146 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:19.108184 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:19.125457 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:19.125507 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:19.190960 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:19.182754   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.183274   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185018   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185416   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.186923   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
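
Each describe-nodes attempt fails identically: the in-node kubectl reads /var/lib/minikube/kubeconfig, which points it at https://localhost:8443, and since the probes find no kube-apiserver container, nothing listens on that port, so every request dies with "connect: connection refused". A quick check that separates "no listener" from "listener but unhealthy", assuming the node image ships ss and curl (both are guesses about the image contents):

	# Connection refused means no socket at all; an HTTP status here would
	# instead mean the apiserver is up but failing its health checks.
	sudo ss -ltn | grep ':8443' || echo "nothing listening on 8443"
	curl -ksm 5 https://localhost:8443/healthz || echo "connect failed"
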
	I1217 12:06:21.691321 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:21.702288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:21.702373 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:21.728533 3219848 cri.go:89] found id: ""
	I1217 12:06:21.728561 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.728571 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:21.728577 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:21.728645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:21.755298 3219848 cri.go:89] found id: ""
	I1217 12:06:21.755323 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.755333 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:21.755345 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:21.755403 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:21.784470 3219848 cri.go:89] found id: ""
	I1217 12:06:21.784494 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.784503 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:21.784509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:21.784568 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:21.811503 3219848 cri.go:89] found id: ""
	I1217 12:06:21.811528 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.811538 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:21.811544 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:21.811602 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:21.841147 3219848 cri.go:89] found id: ""
	I1217 12:06:21.841212 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.841227 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:21.841241 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:21.841303 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:21.867736 3219848 cri.go:89] found id: ""
	I1217 12:06:21.867763 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.867773 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:21.867779 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:21.867847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:21.897039 3219848 cri.go:89] found id: ""
	I1217 12:06:21.897104 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.897121 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:21.897128 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:21.897187 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:21.922398 3219848 cri.go:89] found id: ""
	I1217 12:06:21.922420 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.922429 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:21.922438 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:21.922449 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:21.980203 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:21.980241 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:21.996482 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:21.996513 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:22.074426 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:22.061574   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.062326   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064118   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064738   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.070487   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:22.074474 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:22.074488 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:22.101174 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:22.101210 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:24.630003 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:24.640702 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:24.640773 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:24.666366 3219848 cri.go:89] found id: ""
	I1217 12:06:24.666390 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.666399 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:24.666408 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:24.666465 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:24.693372 3219848 cri.go:89] found id: ""
	I1217 12:06:24.693398 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.693407 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:24.693413 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:24.693478 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:24.723159 3219848 cri.go:89] found id: ""
	I1217 12:06:24.723181 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.723190 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:24.723197 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:24.723264 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:24.747933 3219848 cri.go:89] found id: ""
	I1217 12:06:24.747960 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.747969 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:24.747976 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:24.748044 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:24.774083 3219848 cri.go:89] found id: ""
	I1217 12:06:24.774105 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.774113 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:24.774120 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:24.774186 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:24.808050 3219848 cri.go:89] found id: ""
	I1217 12:06:24.808076 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.808085 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:24.808092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:24.808200 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:24.833993 3219848 cri.go:89] found id: ""
	I1217 12:06:24.834070 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.834085 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:24.834093 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:24.834153 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:24.860654 3219848 cri.go:89] found id: ""
	I1217 12:06:24.860679 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.860688 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:24.860697 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:24.860708 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:24.917182 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:24.917265 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:24.933462 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:24.933491 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:25.002903 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:24.992978   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.993789   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.995410   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.996068   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.997870   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:25.002927 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:25.002960 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:25.031774 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:25.031809 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:27.560620 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:27.575695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:27.575766 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:27.603395 3219848 cri.go:89] found id: ""
	I1217 12:06:27.603421 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.603430 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:27.603436 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:27.603498 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:27.628716 3219848 cri.go:89] found id: ""
	I1217 12:06:27.628739 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.628747 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:27.628754 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:27.628810 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:27.653566 3219848 cri.go:89] found id: ""
	I1217 12:06:27.653629 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.653653 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:27.653679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:27.653756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:27.679125 3219848 cri.go:89] found id: ""
	I1217 12:06:27.679150 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.679159 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:27.679166 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:27.679245 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:27.705566 3219848 cri.go:89] found id: ""
	I1217 12:06:27.705632 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.705656 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:27.705677 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:27.705762 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:27.730473 3219848 cri.go:89] found id: ""
	I1217 12:06:27.730541 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.730556 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:27.730564 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:27.730639 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:27.755451 3219848 cri.go:89] found id: ""
	I1217 12:06:27.755476 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.755485 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:27.755492 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:27.755552 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:27.783637 3219848 cri.go:89] found id: ""
	I1217 12:06:27.783663 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.783673 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:27.783682 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:27.783693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:27.815668 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:27.815707 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:27.846761 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:27.846788 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:27.903961 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:27.903992 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:27.920251 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:27.920285 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:27.989986 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:27.982512   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.983471   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984453   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984910   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.985983   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:30.490267 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:30.501854 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:30.501936 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:30.577313 3219848 cri.go:89] found id: ""
	I1217 12:06:30.577342 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.577352 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:30.577376 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:30.577460 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:30.606634 3219848 cri.go:89] found id: ""
	I1217 12:06:30.606660 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.606670 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:30.606676 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:30.606744 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:30.632310 3219848 cri.go:89] found id: ""
	I1217 12:06:30.632342 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.632351 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:30.632358 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:30.632473 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:30.658929 3219848 cri.go:89] found id: ""
	I1217 12:06:30.658960 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.658970 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:30.658976 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:30.659036 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:30.690494 3219848 cri.go:89] found id: ""
	I1217 12:06:30.690519 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.690529 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:30.690535 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:30.690598 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:30.716270 3219848 cri.go:89] found id: ""
	I1217 12:06:30.716295 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.716305 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:30.716312 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:30.716396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:30.743684 3219848 cri.go:89] found id: ""
	I1217 12:06:30.743720 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.743738 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:30.743745 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:30.743823 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:30.771862 3219848 cri.go:89] found id: ""
	I1217 12:06:30.771895 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.771905 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:30.771915 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:30.771928 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:30.829962 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:30.829997 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:30.846244 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:30.846269 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:30.910789 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:30.902355   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.903184   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.904920   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.905376   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.906932   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:30.902355   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.903184   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.904920   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.905376   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.906932   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
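Note: every kubectl attempt above fails with "connection refused" on [::1]:8443, meaning nothing is listening on the apiserver port at all, so the describe-nodes log gathering cannot succeed until the apiserver comes up. A minimal probe from inside the node (hypothetical, not part of the test) would be:

    curl -sk https://localhost:8443/healthz || echo "nothing listening on 8443"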
	I1217 12:06:30.910812 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:30.910825 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:30.937515 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:30.937552 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
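From this point the start logic repeats the same poll roughly every three seconds: look for a kube-apiserver process, query the CRI for each expected control-plane container by name, and re-gather kubelet/dmesg/containerd logs when nothing is found. A sketch of one poll cycle, using only the commands already visible in this log:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"   # empty output -> container never created
    done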
	I1217 12:06:33.467661 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:33.479263 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:33.479335 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:33.531382 3219848 cri.go:89] found id: ""
	I1217 12:06:33.531405 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.531414 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:33.531420 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:33.531491 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:33.586607 3219848 cri.go:89] found id: ""
	I1217 12:06:33.586628 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.586637 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:33.586651 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:33.586708 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:33.622903 3219848 cri.go:89] found id: ""
	I1217 12:06:33.622925 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.622934 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:33.622940 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:33.623012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:33.652846 3219848 cri.go:89] found id: ""
	I1217 12:06:33.652874 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.652882 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:33.652889 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:33.652946 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:33.677852 3219848 cri.go:89] found id: ""
	I1217 12:06:33.677877 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.677886 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:33.677893 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:33.677972 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:33.706815 3219848 cri.go:89] found id: ""
	I1217 12:06:33.706840 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.706849 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:33.706856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:33.706918 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:33.736780 3219848 cri.go:89] found id: ""
	I1217 12:06:33.736806 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.736816 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:33.736822 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:33.736880 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:33.761376 3219848 cri.go:89] found id: ""
	I1217 12:06:33.761414 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.761424 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:33.761433 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:33.761445 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:33.819076 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:33.819113 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:33.835282 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:33.835311 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:33.903109 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:33.894518   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.895131   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.896856   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.897422   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.899092   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:33.894518   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.895131   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.896856   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.897422   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.899092   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:33.903181 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:33.903206 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:33.935593 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:33.935636 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:36.469816 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:36.480311 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:36.480394 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:36.522000 3219848 cri.go:89] found id: ""
	I1217 12:06:36.522026 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.522035 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:36.522041 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:36.522098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:36.584783 3219848 cri.go:89] found id: ""
	I1217 12:06:36.584811 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.584819 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:36.584825 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:36.584885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:36.614443 3219848 cri.go:89] found id: ""
	I1217 12:06:36.614469 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.614478 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:36.614484 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:36.614543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:36.642952 3219848 cri.go:89] found id: ""
	I1217 12:06:36.642974 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.642982 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:36.642989 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:36.643047 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:36.667989 3219848 cri.go:89] found id: ""
	I1217 12:06:36.668011 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.668019 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:36.668025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:36.668109 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:36.696974 3219848 cri.go:89] found id: ""
	I1217 12:06:36.697049 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.697062 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:36.697096 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:36.697191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:36.723789 3219848 cri.go:89] found id: ""
	I1217 12:06:36.723812 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.723821 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:36.723828 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:36.723885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:36.748007 3219848 cri.go:89] found id: ""
	I1217 12:06:36.748078 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.748102 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:36.748126 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:36.748167 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:36.778526 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:36.778554 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:36.834614 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:36.834648 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:36.852247 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:36.852276 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:36.920099 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:36.911022   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.911723   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913310   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913643   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.915127   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:36.911022   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.911723   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913310   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913643   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.915127   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:36.920123 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:36.920135 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:39.447091 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:39.457670 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:39.457740 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:39.481236 3219848 cri.go:89] found id: ""
	I1217 12:06:39.481260 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.481269 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:39.481276 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:39.481333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:39.539773 3219848 cri.go:89] found id: ""
	I1217 12:06:39.539800 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.539810 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:39.539817 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:39.539879 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:39.586024 3219848 cri.go:89] found id: ""
	I1217 12:06:39.586053 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.586069 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:39.586075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:39.586133 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:39.614247 3219848 cri.go:89] found id: ""
	I1217 12:06:39.614272 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.614281 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:39.614288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:39.614348 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:39.639817 3219848 cri.go:89] found id: ""
	I1217 12:06:39.639840 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.639848 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:39.639855 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:39.639910 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:39.663356 3219848 cri.go:89] found id: ""
	I1217 12:06:39.663382 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.663390 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:39.663397 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:39.663457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:39.692611 3219848 cri.go:89] found id: ""
	I1217 12:06:39.692638 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.692647 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:39.692654 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:39.692714 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:39.718640 3219848 cri.go:89] found id: ""
	I1217 12:06:39.718665 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.718674 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:39.718686 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:39.718698 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:39.743735 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:39.743776 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:39.776101 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:39.776130 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:39.839871 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:39.839912 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:39.856925 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:39.856956 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:39.927715 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:39.920216   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.920790   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.921971   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.922477   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.923988   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:39.920216   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.920790   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.921971   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.922477   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.923988   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:42.428378 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:42.439785 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:42.439861 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:42.467826 3219848 cri.go:89] found id: ""
	I1217 12:06:42.467849 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.467857 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:42.467864 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:42.467928 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:42.492505 3219848 cri.go:89] found id: ""
	I1217 12:06:42.492533 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.492542 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:42.492549 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:42.492607 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:42.562039 3219848 cri.go:89] found id: ""
	I1217 12:06:42.562062 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.562071 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:42.562077 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:42.562147 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:42.600111 3219848 cri.go:89] found id: ""
	I1217 12:06:42.600139 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.600148 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:42.600155 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:42.600218 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:42.631003 3219848 cri.go:89] found id: ""
	I1217 12:06:42.631026 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.631035 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:42.631042 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:42.631101 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:42.655257 3219848 cri.go:89] found id: ""
	I1217 12:06:42.655283 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.655292 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:42.655305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:42.655366 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:42.681199 3219848 cri.go:89] found id: ""
	I1217 12:06:42.681220 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.681229 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:42.681236 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:42.681295 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:42.706511 3219848 cri.go:89] found id: ""
	I1217 12:06:42.706535 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.706544 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:42.706553 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:42.706565 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:42.762839 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:42.762875 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:42.779904 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:42.779936 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:42.849079 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:42.840586   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.841187   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.842724   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.843182   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.844615   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:42.840586   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.841187   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.842724   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.843182   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.844615   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:42.849103 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:42.849114 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:42.874488 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:42.874529 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:45.406478 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:45.417919 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:45.417989 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:45.445578 3219848 cri.go:89] found id: ""
	I1217 12:06:45.445614 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.445624 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:45.445632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:45.445694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:45.477590 3219848 cri.go:89] found id: ""
	I1217 12:06:45.477674 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.477699 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:45.477735 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:45.477831 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:45.515743 3219848 cri.go:89] found id: ""
	I1217 12:06:45.515765 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.515774 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:45.515781 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:45.515840 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:45.550588 3219848 cri.go:89] found id: ""
	I1217 12:06:45.550610 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.550619 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:45.550626 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:45.550684 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:45.595764 3219848 cri.go:89] found id: ""
	I1217 12:06:45.595785 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.595794 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:45.595802 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:45.595862 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:45.621971 3219848 cri.go:89] found id: ""
	I1217 12:06:45.621994 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.622003 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:45.622010 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:45.622077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:45.648142 3219848 cri.go:89] found id: ""
	I1217 12:06:45.648176 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.648186 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:45.648193 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:45.648266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:45.677328 3219848 cri.go:89] found id: ""
	I1217 12:06:45.677364 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.677373 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:45.677383 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:45.677401 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:45.750976 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:45.739342   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.739999   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.744563   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.745136   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.746629   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:45.739342   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.739999   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.744563   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.745136   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.746629   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:45.750998 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:45.751012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:45.777019 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:45.777056 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:45.805927 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:45.805957 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:45.861380 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:45.861414 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:48.377400 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:48.388086 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:48.388158 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:48.412282 3219848 cri.go:89] found id: ""
	I1217 12:06:48.412305 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.412313 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:48.412320 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:48.412377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:48.437811 3219848 cri.go:89] found id: ""
	I1217 12:06:48.437846 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.437856 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:48.437879 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:48.437953 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:48.462517 3219848 cri.go:89] found id: ""
	I1217 12:06:48.462539 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.462547 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:48.462557 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:48.462615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:48.486379 3219848 cri.go:89] found id: ""
	I1217 12:06:48.486402 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.486411 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:48.486418 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:48.486475 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:48.582544 3219848 cri.go:89] found id: ""
	I1217 12:06:48.582569 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.582578 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:48.582585 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:48.582691 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:48.612954 3219848 cri.go:89] found id: ""
	I1217 12:06:48.612980 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.612990 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:48.612997 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:48.613058 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:48.638059 3219848 cri.go:89] found id: ""
	I1217 12:06:48.638083 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.638091 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:48.638098 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:48.638160 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:48.663252 3219848 cri.go:89] found id: ""
	I1217 12:06:48.663278 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.663288 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:48.663298 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:48.663308 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:48.719388 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:48.719422 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:48.735198 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:48.735227 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:48.801972 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:48.793731   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.794319   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.795999   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.796684   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.798278   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:48.793731   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.794319   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.795999   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.796684   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.798278   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:48.801995 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:48.802008 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:48.827753 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:48.827787 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:51.362888 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:51.373695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:51.373779 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:51.399521 3219848 cri.go:89] found id: ""
	I1217 12:06:51.399547 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.399556 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:51.399563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:51.399620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:51.425074 3219848 cri.go:89] found id: ""
	I1217 12:06:51.425140 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.425154 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:51.425161 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:51.425219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:51.449708 3219848 cri.go:89] found id: ""
	I1217 12:06:51.449731 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.449740 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:51.449746 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:51.449818 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:51.478561 3219848 cri.go:89] found id: ""
	I1217 12:06:51.478585 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.478594 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:51.478601 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:51.478687 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:51.520104 3219848 cri.go:89] found id: ""
	I1217 12:06:51.520142 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.520152 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:51.520159 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:51.520227 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:51.589783 3219848 cri.go:89] found id: ""
	I1217 12:06:51.589826 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.589836 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:51.589843 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:51.589914 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:51.616852 3219848 cri.go:89] found id: ""
	I1217 12:06:51.616888 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.616898 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:51.616904 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:51.616967 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:51.643529 3219848 cri.go:89] found id: ""
	I1217 12:06:51.643609 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.643632 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:51.643661 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:51.643706 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:51.707671 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:51.699393   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.700178   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.701673   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.702158   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.703665   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:51.699393   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.700178   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.701673   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.702158   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.703665   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:51.707744 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:51.707772 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:51.733586 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:51.733622 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:51.763883 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:51.763912 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:51.818754 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:51.818788 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:54.336140 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:54.350294 3219848 out.go:203] 
	W1217 12:06:54.353246 3219848 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 12:06:54.353303 3219848 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 12:06:54.353317 3219848 out.go:285] * Related issues:
	W1217 12:06:54.353339 3219848 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 12:06:54.353354 3219848 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 12:06:54.356285 3219848 out.go:203] 
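This is the terminal failure for the run: K8S_APISERVER_MISSING means the 6m wait expired without a kube-apiserver process ever appearing, which matches every crictl query above returning an empty list (the container was never created, as opposed to crash-looping). When reproducing locally, a reasonable first triage pass inside the node, reusing the same commands the wait loop runs, would be:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    sudo crictl ps -a --name=kube-apiserver          # even exited attempts would be listed
    sudo journalctl -u kubelet -n 400 | tail -n 40   # why the static pod was never started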
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.201958753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.201978051Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202016040Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202033845Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202043395Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202054242Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202063719Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202075034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202091206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202122163Z" level=info msg="Connect containerd service"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202376764Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202915340Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221759735Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221831644Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221883507Z" level=info msg="Start subscribing containerd event"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221927577Z" level=info msg="Start recovering state"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261428629Z" level=info msg="Start event monitor"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261488361Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261499979Z" level=info msg="Start streaming server"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261510449Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261519753Z" level=info msg="runtime interface starting up..."
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261526965Z" level=info msg="starting plugins..."
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261557275Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261851842Z" level=info msg="containerd successfully booted in 0.083557s"
	Dec 17 12:00:50 newest-cni-669680 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:57.480721   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:57.481333   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:57.482947   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:57.483430   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:57.485040   13430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 12:06:57 up 17:49,  0 user,  load average: 0.73, 0.75, 1.11
	Linux newest-cni-669680 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 12:06:54 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:06:54 newest-cni-669680 kubelet[13309]: E1217 12:06:54.638063   13309 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:06:54 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:06:54 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:06:55 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 17 12:06:55 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:06:55 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:06:55 newest-cni-669680 kubelet[13314]: E1217 12:06:55.377689   13314 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:06:55 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:06:55 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:06:56 newest-cni-669680 kubelet[13332]: E1217 12:06:56.115741   13332 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:06:56 newest-cni-669680 kubelet[13340]: E1217 12:06:56.852206   13340 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:06:56 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:06:57 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 17 12:06:57 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:06:57 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
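The kubelet log above points at the root cause: every restart attempt (counters 485-488) fails validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver is never scheduled and the status check below reports "Stopped". A minimal diagnostic sketch, assuming the profile name taken from this log and that the node container is still up (the commands are illustrative, not part of the test suite):

    # Report the cgroup filesystem type on the node:
    # "tmpfs" means cgroup v1 (rejected by this kubelet), "cgroup2fs" means v2.
    minikube ssh -p newest-cni-669680 "stat -fc %T /sys/fs/cgroup"

    # Confirm the restart loop from inside the node.
    minikube ssh -p newest-cni-669680 "sudo journalctl -u kubelet -n 20"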
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (427.334015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-669680" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (375.17s)
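For completeness, the empty "container status" table and containerd's "no network config found in /etc/cni/net.d" message are consistent with kubelet never passing validation, not necessarily a separate CNI failure: with kubelet crash-looping, nothing is ever scheduled. A hypothetical follow-up check, under the same assumptions as the sketch above:

    # Both are expected to come back empty while kubelet is crash-looping.
    minikube ssh -p newest-cni-669680 "ls -la /etc/cni/net.d"
    minikube ssh -p newest-cni-669680 "sudo crictl ps -a"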

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[... the identical warning repeated 91 more times while the test polled the unreachable apiserver ...]
E1217 12:03:36.152516 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[... the same pod-list warning repeated 14 times while polling continued ...]
E1217 12:03:50.515352 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[... the same pod-list warning repeated 14 times while polling continued ...]
E1217 12:04:03.818579 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 23 times
E1217 12:04:28.206258 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 44 times
E1217 12:05:13.579325 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 29 times
E1217 12:05:43.083619 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
last message repeated 82 times
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1217 12:08:03.979295 2924574 config.go:182] Loaded profile config "auto-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 31 more times]
E1217 12:08:36.152711 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 13 more times]
E1217 12:08:50.515540 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 13 more times]
E1217 12:09:03.819165 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 6 more times]
E1217 12:09:11.284797 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 16 more times]
E1217 12:09:28.205384 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 7 more times]
I1217 12:09:35.728563 2924574 config.go:182] Loaded profile config "kindnet-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 23 times; duplicates elided]
E1217 12:09:59.225598 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 28 times; duplicates elided]
E1217 12:10:26.884288 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 16 times; duplicates elided]
E1217 12:10:43.082751 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical WARNING repeated 21 times; duplicates elided]
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262: exit status 2 (373.69162ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-118262" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
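[editor's note] The elided warnings above come from the helper polling the apiserver for pods matching the label selector; with the apiserver stopped, every poll fails at TCP connect before any Kubernetes-level check runs. A minimal manual reproduction of that poll, assuming the endpoint taken from the warnings (192.168.85.2:8443) and skipping TLS verification purely for illustration:

	kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard --server=https://192.168.85.2:8443 --insecure-skip-tls-verify

Run against the stopped apiserver, this fails the same way, with "connect: connection refused".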
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
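[editor's note] The snapshot above is just the host environment at post-mortem time; an equivalent check on the Jenkins host would be something like:

	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo 'no proxy variables set'

All three values being "<empty>" rules out a proxy intercepting the 192.168.85.2:8443 connection.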
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-118262
helpers_test.go:244: (dbg) docker inspect no-preload-118262:

-- stdout --
	[
	    {
	        "Id": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	        "Created": "2025-12-17T11:45:23.889791979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3213113,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:55:54.36927291Z",
	            "FinishedAt": "2025-12-17T11:55:53.009633374Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hostname",
	        "HostsPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hosts",
	        "LogPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362-json.log",
	        "Name": "/no-preload-118262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-118262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-118262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	                "LowerDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-118262",
	                "Source": "/var/lib/docker/volumes/no-preload-118262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-118262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-118262",
	                "name.minikube.sigs.k8s.io": "no-preload-118262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a5bb1af38cbf7e52f627da4de2cc21445576f9ee9ac16469472822e1e4e3c56f",
	            "SandboxKey": "/var/run/docker/netns/a5bb1af38cbf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-118262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:fb:41:14:2f:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3227851744df2bdac9c367dc789ddfe2892f877b7b9b947cdcd81cb2897c4ba1",
	                    "EndpointID": "c35288f197473390678d887f2fedc1b13457164e1aa2e715d8bd350b76e059bf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-118262",
	                        "4578079103f7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
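[editor's note] The full inspect dump is verbose; when only the state and address matter (the fields this post-mortem actually reads), docker's Go-template formatter can extract them directly. A sketch using the container name from this run:

	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-118262

For the dump above this prints "running 192.168.85.2", confirming the container is up even though the apiserver inside it is not.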
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262: exit status 2 (315.025684ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
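[editor's note] The two status probes above each query a single field ({{.APIServer}} and {{.Host}}); minikube's status command accepts a Go template over its status struct, so one invocation can capture the whole picture. A hedged sketch, assuming the field names seen in the probes above plus minikube's Kubelet status field:

	out/minikube-linux-arm64 status -p no-preload-118262 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'

Here that would report the host Running but the apiserver Stopped, consistent with the exit status 2 the helper notes "may be ok".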
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-118262 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-348887 sudo systemctl status kubelet --all --full --no-pager                                                                      │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:09 UTC │
	│ ssh     │ -p kindnet-348887 sudo systemctl cat kubelet --no-pager                                                                                      │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:09 UTC │
	│ ssh     │ -p kindnet-348887 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:09 UTC │
	│ ssh     │ -p kindnet-348887 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:09 UTC │
	│ ssh     │ -p kindnet-348887 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:09 UTC │
	│ ssh     │ -p kindnet-348887 sudo systemctl status docker --all --full --no-pager                                                                       │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │                     │
	│ ssh     │ -p kindnet-348887 sudo systemctl cat docker --no-pager                                                                                       │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:09 UTC │
	│ ssh     │ -p kindnet-348887 sudo cat /etc/docker/daemon.json                                                                                           │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │                     │
	│ ssh     │ -p kindnet-348887 sudo docker system info                                                                                                    │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │                     │
	│ ssh     │ -p kindnet-348887 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │                     │
	│ ssh     │ -p kindnet-348887 sudo systemctl cat cri-docker --no-pager                                                                                   │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:09 UTC │
	│ ssh     │ -p kindnet-348887 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │                     │
	│ ssh     │ -p kindnet-348887 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ ssh     │ -p kindnet-348887 sudo cri-dockerd --version                                                                                                 │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ ssh     │ -p kindnet-348887 sudo systemctl status containerd --all --full --no-pager                                                                   │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ ssh     │ -p kindnet-348887 sudo systemctl cat containerd --no-pager                                                                                   │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ ssh     │ -p kindnet-348887 sudo cat /lib/systemd/system/containerd.service                                                                            │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ ssh     │ -p kindnet-348887 sudo cat /etc/containerd/config.toml                                                                                       │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ ssh     │ -p kindnet-348887 sudo containerd config dump                                                                                                │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ ssh     │ -p kindnet-348887 sudo systemctl status crio --all --full --no-pager                                                                         │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │                     │
	│ ssh     │ -p kindnet-348887 sudo systemctl cat crio --no-pager                                                                                         │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ ssh     │ -p kindnet-348887 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ ssh     │ -p kindnet-348887 sudo crio config                                                                                                           │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ delete  │ -p kindnet-348887                                                                                                                            │ kindnet-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	│ start   │ -p calico-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd │ calico-348887  │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:11 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
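	[editor's note] Each row in the Audit table is a minikube ssh invocation, fanning out over every runtime the node might use (kubelet, docker, cri-docker, containerd, crio). To rerun any one of them by hand against a still-existing profile, for example the containerd status check:

		out/minikube-linux-arm64 ssh -p kindnet-348887 "sudo systemctl status containerd --all --full --no-pager"

	Rows with an empty END TIME are the probes that failed, which is expected here since docker and crio are not the active runtime on a containerd node.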
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 12:10:06
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 12:10:06.669021 3251787 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:10:06.669221 3251787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:10:06.669258 3251787 out.go:374] Setting ErrFile to fd 2...
	I1217 12:10:06.669288 3251787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:10:06.669585 3251787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 12:10:06.670059 3251787 out.go:368] Setting JSON to false
	I1217 12:10:06.670966 3251787 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":64357,"bootTime":1765909050,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 12:10:06.671071 3251787 start.go:143] virtualization:  
	I1217 12:10:06.672815 3251787 out.go:179] * [calico-348887] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 12:10:06.674335 3251787 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 12:10:06.674466 3251787 notify.go:221] Checking for updates...
	I1217 12:10:06.676947 3251787 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:10:06.678430 3251787 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:10:06.680194 3251787 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 12:10:06.681636 3251787 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 12:10:06.683056 3251787 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:10:06.684920 3251787 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:10:06.685381 3251787 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:10:06.708139 3251787 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 12:10:06.708271 3251787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:10:06.776831 3251787 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:10:06.766776874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:10:06.776952 3251787 docker.go:319] overlay module found
	I1217 12:10:06.778490 3251787 out.go:179] * Using the docker driver based on user configuration
	I1217 12:10:06.779634 3251787 start.go:309] selected driver: docker
	I1217 12:10:06.779655 3251787 start.go:927] validating driver "docker" against <nil>
	I1217 12:10:06.779674 3251787 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:10:06.780481 3251787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:10:06.831469 3251787 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:10:06.821898367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
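The two docker system info --format "{{json .}}" probes above are how minikube sizes up the host: once while picking a driver, once while validating the chosen one. A minimal way to reproduce the same probe by hand (jq is an assumption here; any JSON tool works):

    # Query the daemon the same way the log does, then pull out the
    # fields that matter for driver validation: CPUs, memory, cgroup driver.
    docker system info --format '{{json .}}' \
      | jq '{NCPU, MemTotal, CgroupDriver, OperatingSystem}'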
	I1217 12:10:06.831621 3251787 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 12:10:06.831876 3251787 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 12:10:06.833202 3251787 out.go:179] * Using Docker driver with root privileges
	I1217 12:10:06.834491 3251787 cni.go:84] Creating CNI manager for "calico"
	I1217 12:10:06.834521 3251787 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1217 12:10:06.834644 3251787 start.go:353] cluster config:
	{Name:calico-348887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:10:06.836341 3251787 out.go:179] * Starting "calico-348887" primary control-plane node in "calico-348887" cluster
	I1217 12:10:06.837595 3251787 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 12:10:06.838820 3251787 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 12:10:06.839946 3251787 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1217 12:10:06.840004 3251787 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1217 12:10:06.840019 3251787 cache.go:65] Caching tarball of preloaded images
	I1217 12:10:06.840041 3251787 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 12:10:06.840099 3251787 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 12:10:06.840110 3251787 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1217 12:10:06.840222 3251787 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/config.json ...
	I1217 12:10:06.840240 3251787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/config.json: {Name:mk036effe2ec7b13a9cb13ac87a070522484f0c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
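Each profile's cluster config is persisted as plain JSON at the path noted in the WriteFile line above, so the exact options a later start will reuse can be read back directly. A sketch, assuming MINIKUBE_HOME points at this run's .minikube directory, jq is installed, and the on-disk JSON keys mirror the struct field names shown in the log:

    # Read back the profile config that was just written.
    jq '.KubernetesConfig | {KubernetesVersion, ClusterName, CNI}' \
      "$MINIKUBE_HOME/profiles/calico-348887/config.json"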
	I1217 12:10:06.860029 3251787 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 12:10:06.860050 3251787 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 12:10:06.860070 3251787 cache.go:243] Successfully downloaded all kic artifacts
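The pinned kicbase digest is resolved against the local daemon first, which is why no pull happens above. The same existence check can be made by hand; a non-zero exit means the image would have to be downloaded:

    # Succeeds (and prints the image ID) only if the pinned digest
    # is already present in the local Docker daemon.
    docker image inspect --format '{{.Id}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78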
	I1217 12:10:06.860104 3251787 start.go:360] acquireMachinesLock for calico-348887: {Name:mk43b2e551ca75e6b02cb1a3281eb59c332d2a6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:10:06.860226 3251787 start.go:364] duration metric: took 101.676µs to acquireMachinesLock for "calico-348887"
	I1217 12:10:06.860258 3251787 start.go:93] Provisioning new machine with config: &{Name:calico-348887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 12:10:06.860332 3251787 start.go:125] createHost starting for "" (driver="docker")
	I1217 12:10:06.862191 3251787 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 12:10:06.862420 3251787 start.go:159] libmachine.API.Create for "calico-348887" (driver="docker")
	I1217 12:10:06.862454 3251787 client.go:173] LocalClient.Create starting
	I1217 12:10:06.862515 3251787 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 12:10:06.862548 3251787 main.go:143] libmachine: Decoding PEM data...
	I1217 12:10:06.862567 3251787 main.go:143] libmachine: Parsing certificate...
	I1217 12:10:06.862625 3251787 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 12:10:06.862645 3251787 main.go:143] libmachine: Decoding PEM data...
	I1217 12:10:06.862661 3251787 main.go:143] libmachine: Parsing certificate...
	I1217 12:10:06.863000 3251787 cli_runner.go:164] Run: docker network inspect calico-348887 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 12:10:06.879412 3251787 cli_runner.go:211] docker network inspect calico-348887 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 12:10:06.879502 3251787 network_create.go:284] running [docker network inspect calico-348887] to gather additional debugging logs...
	I1217 12:10:06.879523 3251787 cli_runner.go:164] Run: docker network inspect calico-348887
	W1217 12:10:06.896390 3251787 cli_runner.go:211] docker network inspect calico-348887 returned with exit code 1
	I1217 12:10:06.896457 3251787 network_create.go:287] error running [docker network inspect calico-348887]: docker network inspect calico-348887: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-348887 not found
	I1217 12:10:06.896474 3251787 network_create.go:289] output of [docker network inspect calico-348887]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-348887 not found
	
	** /stderr **
	I1217 12:10:06.896583 3251787 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 12:10:06.912804 3251787 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
	I1217 12:10:06.913139 3251787 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e0545776686c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:70:9e:49:ed:7d} reservation:<nil>}
	I1217 12:10:06.913532 3251787 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-279becfad84b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:b7:62:6e:a9:ee} reservation:<nil>}
	I1217 12:10:06.913956 3251787 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a14e20}
	I1217 12:10:06.913979 3251787 network_create.go:124] attempt to create docker network calico-348887 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 12:10:06.914044 3251787 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-348887 calico-348887
	I1217 12:10:06.969349 3251787 network_create.go:108] docker network calico-348887 192.168.76.0/24 created
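The three "skipping subnet" lines show the free-subnet walk: minikube steps through private /24 candidates (192.168.49.0, .58.0, .67.0, ...) until one is not claimed by an existing bridge, which is how 192.168.76.0/24 was chosen here. To see the same picture the scan sees, list every subnet Docker has already reserved:

    # Print each Docker network alongside the subnet it occupies.
    docker network ls -q \
      | xargs docker network inspect \
          --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'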
	I1217 12:10:06.969384 3251787 kic.go:121] calculated static IP "192.168.76.2" for the "calico-348887" container
	I1217 12:10:06.969475 3251787 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 12:10:06.986325 3251787 cli_runner.go:164] Run: docker volume create calico-348887 --label name.minikube.sigs.k8s.io=calico-348887 --label created_by.minikube.sigs.k8s.io=true
	I1217 12:10:07.004239 3251787 oci.go:103] Successfully created a docker volume calico-348887
	I1217 12:10:07.004368 3251787 cli_runner.go:164] Run: docker run --rm --name calico-348887-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-348887 --entrypoint /usr/bin/test -v calico-348887:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 12:10:07.496605 3251787 oci.go:107] Successfully prepared a docker volume calico-348887
	I1217 12:10:07.496678 3251787 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1217 12:10:07.496694 3251787 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 12:10:07.496765 3251787 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-348887:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 12:10:11.547286 3251787 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-348887:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.050480532s)
	I1217 12:10:11.547319 3251787 kic.go:203] duration metric: took 4.050621731s to extract preloaded images to volume ...
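The four-second step above is the preload fast path: a throwaway container runs tar from the kicbase image itself and unpacks the cached image tarball straight into the machine's named volume, so nothing is pulled over the network. The general shape of the command, with the tarball and image hoisted into shell variables purely for readability (the variable names are illustrative):

    # Unpack an lz4-compressed preload into the machine volume,
    # mirroring the docker run logged above.
    PRELOAD=/home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
    KICBASE='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78'
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" -v calico-348887:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir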
	W1217 12:10:11.547467 3251787 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 12:10:11.547580 3251787 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 12:10:11.622451 3251787 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-348887 --name calico-348887 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-348887 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-348887 --network calico-348887 --ip 192.168.76.2 --volume calico-348887:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 12:10:11.941991 3251787 cli_runner.go:164] Run: docker container inspect calico-348887 --format={{.State.Running}}
	I1217 12:10:11.966829 3251787 cli_runner.go:164] Run: docker container inspect calico-348887 --format={{.State.Status}}
	I1217 12:10:11.989472 3251787 cli_runner.go:164] Run: docker exec calico-348887 stat /var/lib/dpkg/alternatives/iptables
	I1217 12:10:12.048472 3251787 oci.go:144] the created container "calico-348887" has a running status.
	I1217 12:10:12.048534 3251787 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/calico-348887/id_ed25519...
	I1217 12:10:12.055424 3251787 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/calico-348887/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 12:10:12.080129 3251787 cli_runner.go:164] Run: docker container inspect calico-348887 --format={{.State.Status}}
	I1217 12:10:12.101018 3251787 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 12:10:12.101036 3251787 kic_runner.go:114] Args: [docker exec --privileged calico-348887 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 12:10:12.154288 3251787 cli_runner.go:164] Run: docker container inspect calico-348887 --format={{.State.Status}}
	I1217 12:10:12.177840 3251787 machine.go:94] provisionDockerMachine start ...
	I1217 12:10:12.177935 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:12.209358 3251787 main.go:143] libmachine: Using SSH client type: native
	I1217 12:10:12.209528 3251787 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36068 <nil> <nil>}
	I1217 12:10:12.209537 3251787 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:10:12.210308 3251787 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42546->127.0.0.1:36068: read: connection reset by peer
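The --publish=127.0.0.1::22 flag in the docker run above maps sshd to an ephemeral loopback port (36068 in this run), and the first dial races the container's sshd coming up, which is why the handshake above is reset; the retry a few seconds later succeeds. The mapped port for a running profile container can be resolved with:

    # Show the ephemeral host port bound to the container's sshd.
    docker port calico-348887 22/tcp    # e.g. 127.0.0.1:36068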
	I1217 12:10:15.344088 3251787 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-348887
	
	I1217 12:10:15.344115 3251787 ubuntu.go:182] provisioning hostname "calico-348887"
	I1217 12:10:15.344241 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:15.363868 3251787 main.go:143] libmachine: Using SSH client type: native
	I1217 12:10:15.363987 3251787 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36068 <nil> <nil>}
	I1217 12:10:15.363999 3251787 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-348887 && echo "calico-348887" | sudo tee /etc/hostname
	I1217 12:10:15.514595 3251787 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-348887
	
	I1217 12:10:15.514682 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:15.540595 3251787 main.go:143] libmachine: Using SSH client type: native
	I1217 12:10:15.540719 3251787 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36068 <nil> <nil>}
	I1217 12:10:15.540736 3251787 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-348887' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-348887/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-348887' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:10:15.672869 3251787 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:10:15.672894 3251787 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 12:10:15.672912 3251787 ubuntu.go:190] setting up certificates
	I1217 12:10:15.672922 3251787 provision.go:84] configureAuth start
	I1217 12:10:15.672993 3251787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-348887
	I1217 12:10:15.691290 3251787 provision.go:143] copyHostCerts
	I1217 12:10:15.691369 3251787 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 12:10:15.691384 3251787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 12:10:15.691465 3251787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 12:10:15.691566 3251787 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 12:10:15.691578 3251787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 12:10:15.691605 3251787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 12:10:15.691665 3251787 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 12:10:15.691674 3251787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 12:10:15.691698 3251787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 12:10:15.691747 3251787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.calico-348887 san=[127.0.0.1 192.168.76.2 calico-348887 localhost minikube]
	I1217 12:10:15.827660 3251787 provision.go:177] copyRemoteCerts
	I1217 12:10:15.827733 3251787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:10:15.827781 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:15.845273 3251787 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36068 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/calico-348887/id_ed25519 Username:docker}
	I1217 12:10:15.940466 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 12:10:15.958346 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:10:15.976411 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 12:10:15.994810 3251787 provision.go:87] duration metric: took 321.856191ms to configureAuth
	I1217 12:10:15.994838 3251787 ubuntu.go:206] setting minikube options for container-runtime
	I1217 12:10:15.995017 3251787 config.go:182] Loaded profile config "calico-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 12:10:15.995033 3251787 machine.go:97] duration metric: took 3.817172412s to provisionDockerMachine
	I1217 12:10:15.995041 3251787 client.go:176] duration metric: took 9.132578605s to LocalClient.Create
	I1217 12:10:15.995065 3251787 start.go:167] duration metric: took 9.132646411s to libmachine.API.Create "calico-348887"
	I1217 12:10:15.995076 3251787 start.go:293] postStartSetup for "calico-348887" (driver="docker")
	I1217 12:10:15.995084 3251787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:10:15.995142 3251787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:10:15.995191 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:16.014864 3251787 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36068 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/calico-348887/id_ed25519 Username:docker}
	I1217 12:10:16.108815 3251787 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:10:16.112224 3251787 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 12:10:16.112250 3251787 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 12:10:16.112262 3251787 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 12:10:16.112315 3251787 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 12:10:16.112390 3251787 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 12:10:16.112522 3251787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:10:16.120313 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:10:16.139293 3251787 start.go:296] duration metric: took 144.202373ms for postStartSetup
	I1217 12:10:16.139673 3251787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-348887
	I1217 12:10:16.157176 3251787 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/config.json ...
	I1217 12:10:16.157469 3251787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 12:10:16.157519 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:16.174138 3251787 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36068 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/calico-348887/id_ed25519 Username:docker}
	I1217 12:10:16.270016 3251787 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 12:10:16.274965 3251787 start.go:128] duration metric: took 9.414618317s to createHost
	I1217 12:10:16.274998 3251787 start.go:83] releasing machines lock for "calico-348887", held for 9.414755421s
	I1217 12:10:16.275069 3251787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-348887
	I1217 12:10:16.293004 3251787 ssh_runner.go:195] Run: cat /version.json
	I1217 12:10:16.293055 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:16.293306 3251787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:10:16.293354 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:16.324401 3251787 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36068 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/calico-348887/id_ed25519 Username:docker}
	I1217 12:10:16.336640 3251787 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36068 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/calico-348887/id_ed25519 Username:docker}
	I1217 12:10:16.512316 3251787 ssh_runner.go:195] Run: systemctl --version
	I1217 12:10:16.520717 3251787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:10:16.525245 3251787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:10:16.525336 3251787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:10:16.555386 3251787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
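The find/mv pass above sidelines any bridge or podman CNI configs (renaming them to *.mk_disabled) so that only the Calico config installed later will be active. A read-only variant of the same match, useful to preview what would be disabled:

    # List the CNI configs the rename would touch, without moving them.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) -print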
	I1217 12:10:16.555423 3251787 start.go:496] detecting cgroup driver to use...
	I1217 12:10:16.555461 3251787 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 12:10:16.555516 3251787 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 12:10:16.571924 3251787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 12:10:16.585957 3251787 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:10:16.586048 3251787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:10:16.604706 3251787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:10:16.624015 3251787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:10:16.746633 3251787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:10:16.880740 3251787 docker.go:234] disabling docker service ...
	I1217 12:10:16.880832 3251787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:10:16.904611 3251787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:10:16.918368 3251787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:10:17.055320 3251787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:10:17.172944 3251787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
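Because the node runs containerd directly, dockerd inside the container is stopped, its socket disabled, and the unit masked so socket activation cannot bring it back; the is-active probe above confirms it stayed down. Condensed into one sequence:

    # Stop dockerd and prevent both the unit and its socket from
    # being started again behind containerd's back.
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service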
	I1217 12:10:17.186246 3251787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:10:17.200841 3251787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 12:10:17.210515 3251787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 12:10:17.220250 3251787 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 12:10:17.220348 3251787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 12:10:17.230251 3251787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:10:17.238986 3251787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 12:10:17.247986 3251787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:10:17.257041 3251787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:10:17.265284 3251787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 12:10:17.274533 3251787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 12:10:17.283523 3251787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 12:10:17.293128 3251787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:10:17.301242 3251787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
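The run of sed edits above pins the pause image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the host's cgroupfs driver, normalizes the runc runtime to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports; the sysctl and ip_forward writes prepare the kernel for pod networking. Before the containerd restart below takes effect, the rewritten config can be spot-checked:

    # Confirm the keys the sed pipeline is expected to have rewritten.
    grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' \
      /etc/containerd/config.toml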
	I1217 12:10:17.309125 3251787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:10:17.417253 3251787 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 12:10:17.561285 3251787 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 12:10:17.561375 3251787 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 12:10:17.565361 3251787 start.go:564] Will wait 60s for crictl version
	I1217 12:10:17.565434 3251787 ssh_runner.go:195] Run: which crictl
	I1217 12:10:17.569279 3251787 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 12:10:17.592905 3251787 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
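The bare crictl calls above work flag-free because /etc/crictl.yaml, written a few steps earlier, pins the runtime endpoint to containerd's socket. Without that config file, the same endpoint can be passed per invocation:

    # Equivalent one-off form of what /etc/crictl.yaml makes the default.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version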
	I1217 12:10:17.593264 3251787 ssh_runner.go:195] Run: containerd --version
	I1217 12:10:17.621289 3251787 ssh_runner.go:195] Run: containerd --version
	I1217 12:10:17.649371 3251787 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1217 12:10:17.652303 3251787 cli_runner.go:164] Run: docker network inspect calico-348887 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 12:10:17.668220 3251787 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 12:10:17.672208 3251787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:10:17.684085 3251787 kubeadm.go:884] updating cluster {Name:calico-348887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:10:17.684209 3251787 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1217 12:10:17.684283 3251787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:10:17.710732 3251787 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:10:17.710758 3251787 containerd.go:534] Images already preloaded, skipping extraction
	I1217 12:10:17.710820 3251787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:10:17.735853 3251787 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:10:17.735879 3251787 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:10:17.735888 3251787 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 containerd true true} ...
	I1217 12:10:17.736008 3251787 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-348887 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:calico-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
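The unit text above is the systemd drop-in minikube generates: it clears ExecStart and re-points it at the version-pinned kubelet binary with the node's identity flags, and it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. Once installed, the effective unit plus all drop-ins can be reviewed in one place:

    # Show kubelet.service together with the 10-kubeadm.conf drop-in.
    systemctl cat kubelet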
	I1217 12:10:17.736077 3251787 ssh_runner.go:195] Run: sudo crictl info
	I1217 12:10:17.766849 3251787 cni.go:84] Creating CNI manager for "calico"
	I1217 12:10:17.766876 3251787 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 12:10:17.766899 3251787 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-348887 NodeName:calico-348887 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:10:17.767014 3251787 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-348887"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
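The YAML above is rendered from the kubeadm options struct in the preceding log line and shipped to /var/tmp/minikube/kubeadm.yaml.new before init. As a sanity check on the node (the validate subcommand is an assumption about the kubeadm release in use; recent releases have it):

    # Validate the generated config against kubeadm's API schema.
    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new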
	
	I1217 12:10:17.767080 3251787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 12:10:17.776244 3251787 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:10:17.776368 3251787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:10:17.785425 3251787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1217 12:10:17.800321 3251787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 12:10:17.815924 3251787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1217 12:10:17.831514 3251787 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 12:10:17.835242 3251787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:10:17.845227 3251787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:10:17.961288 3251787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:10:17.977585 3251787 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887 for IP: 192.168.76.2
	I1217 12:10:17.977607 3251787 certs.go:195] generating shared ca certs ...
	I1217 12:10:17.977625 3251787 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:17.977763 3251787 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 12:10:17.977813 3251787 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 12:10:17.977824 3251787 certs.go:257] generating profile certs ...
	I1217 12:10:17.977877 3251787 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/client.key
	I1217 12:10:17.977894 3251787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/client.crt with IP's: []
	I1217 12:10:18.356088 3251787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/client.crt ...
	I1217 12:10:18.356123 3251787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/client.crt: {Name:mk6a750026a3bb85d69fc2c36626d4592a5870bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:18.356323 3251787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/client.key ...
	I1217 12:10:18.356336 3251787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/client.key: {Name:mk672fecdba6e594b78739c05e013099586f3f7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:18.356451 3251787 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.key.7ad22dc1
	I1217 12:10:18.356474 3251787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.crt.7ad22dc1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 12:10:18.777759 3251787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.crt.7ad22dc1 ...
	I1217 12:10:18.777791 3251787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.crt.7ad22dc1: {Name:mkc07e4adfd1b365a67a5ca1b1914ee091997135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:18.777985 3251787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.key.7ad22dc1 ...
	I1217 12:10:18.778000 3251787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.key.7ad22dc1: {Name:mk3750897492d60d3c3a83d2d540ec2227d99828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:18.778084 3251787 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.crt.7ad22dc1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.crt
	I1217 12:10:18.778172 3251787 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.key.7ad22dc1 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.key
	I1217 12:10:18.778234 3251787 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/proxy-client.key
	I1217 12:10:18.778256 3251787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/proxy-client.crt with IP's: []
	I1217 12:10:18.955516 3251787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/proxy-client.crt ...
	I1217 12:10:18.955547 3251787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/proxy-client.crt: {Name:mk475ec19dc686fd1258a87242510ad5841dc0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:18.955738 3251787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/proxy-client.key ...
	I1217 12:10:18.955756 3251787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/proxy-client.key: {Name:mkb2a5a02b275bc553528de93c0f218c8c8b8fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
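At this point the profile holds three freshly signed pairs: a client cert for kubectl ("minikube-user"), an apiserver serving cert whose SANs were logged above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2), and a front-proxy client cert for the aggregator. The SAN set can be confirmed with openssl (the -ext flag assumes OpenSSL 1.1.1 or newer; MINIKUBE_HOME is again an assumption about your setup):

    # Print subject and SANs of the newly generated apiserver cert.
    openssl x509 -noout -subject -ext subjectAltName \
      -in "$MINIKUBE_HOME/profiles/calico-348887/apiserver.crt"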
	I1217 12:10:18.955959 3251787 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 12:10:18.956010 3251787 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 12:10:18.956021 3251787 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:10:18.956055 3251787 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:10:18.956086 3251787 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:10:18.956116 3251787 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 12:10:18.956162 3251787 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:10:18.956856 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:10:18.976281 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:10:18.995252 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:10:19.017995 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:10:19.037089 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 12:10:19.058016 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 12:10:19.083632 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:10:19.102732 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:10:19.122339 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:10:19.140322 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 12:10:19.159361 3251787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 12:10:19.178010 3251787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:10:19.192087 3251787 ssh_runner.go:195] Run: openssl version
	I1217 12:10:19.198861 3251787 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:10:19.207110 3251787 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:10:19.215457 3251787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:10:19.219683 3251787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:10:19.219774 3251787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:10:19.262020 3251787 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:10:19.271158 3251787 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 12:10:19.280007 3251787 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 12:10:19.289078 3251787 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 12:10:19.298067 3251787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 12:10:19.303088 3251787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 12:10:19.303212 3251787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 12:10:19.347293 3251787 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:10:19.355280 3251787 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
	I1217 12:10:19.363322 3251787 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 12:10:19.371450 3251787 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 12:10:19.379413 3251787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 12:10:19.383204 3251787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 12:10:19.383272 3251787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 12:10:19.424609 3251787 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:10:19.432540 3251787 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
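
The block above is minikube's equivalent of c_rehash: each CA is copied into /usr/share/ca-certificates, hashed with `openssl x509 -hash -noout`, and symlinked as /etc/ssl/certs/<subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0) so OpenSSL-based clients can find it by hash. A minimal sketch of that sequence, with hypothetical function names:

// installCACert mirrors the logged commands: hash the PEM, then place an
// `ln -fs`-style /etc/ssl/certs/<hash>.0 link at it. Not minikube's code.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. b5213941
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
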
	I1217 12:10:19.440509 3251787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:10:19.444216 3251787 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 12:10:19.444279 3251787 kubeadm.go:401] StartCluster: {Name:calico-348887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:10:19.444356 3251787 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 12:10:19.444515 3251787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:10:19.470591 3251787 cri.go:89] found id: ""
	I1217 12:10:19.470690 3251787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:10:19.478711 3251787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 12:10:19.486802 3251787 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 12:10:19.486888 3251787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 12:10:19.495484 3251787 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 12:10:19.495508 3251787 kubeadm.go:158] found existing configuration files:
	
	I1217 12:10:19.495564 3251787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 12:10:19.504215 3251787 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 12:10:19.504283 3251787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 12:10:19.512348 3251787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 12:10:19.522667 3251787 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 12:10:19.522735 3251787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 12:10:19.533206 3251787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 12:10:19.542390 3251787 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 12:10:19.542500 3251787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 12:10:19.550974 3251787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 12:10:19.561012 3251787 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 12:10:19.561140 3251787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
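
The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it (here the files are simply absent, hence the status-2 greps). A minimal sketch of that loop, assuming local file access rather than the ssh_runner:

// cleanStaleKubeconfigs is a hypothetical stand-in for the logged sequence.
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(path) // mirrors `sudo rm -f`; a missing file is fine
			fmt.Printf("cleaned %s\n", path)
		}
	}
}
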
	I1217 12:10:19.570809 3251787 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 12:10:19.614371 3251787 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 12:10:19.614828 3251787 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 12:10:19.640603 3251787 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 12:10:19.640681 3251787 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 12:10:19.640723 3251787 kubeadm.go:319] OS: Linux
	I1217 12:10:19.640774 3251787 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 12:10:19.640826 3251787 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 12:10:19.640881 3251787 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 12:10:19.640932 3251787 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 12:10:19.640982 3251787 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 12:10:19.641033 3251787 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 12:10:19.641083 3251787 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 12:10:19.641143 3251787 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 12:10:19.641202 3251787 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 12:10:19.711262 3251787 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 12:10:19.711476 3251787 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 12:10:19.711623 3251787 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 12:10:19.718311 3251787 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 12:10:19.724948 3251787 out.go:252]   - Generating certificates and keys ...
	I1217 12:10:19.725118 3251787 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 12:10:19.725206 3251787 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 12:10:20.323545 3251787 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 12:10:20.742178 3251787 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 12:10:20.970777 3251787 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 12:10:21.523027 3251787 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 12:10:21.802515 3251787 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 12:10:21.802827 3251787 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-348887 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 12:10:21.965758 3251787 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 12:10:21.965900 3251787 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-348887 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 12:10:23.209116 3251787 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 12:10:23.759221 3251787 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 12:10:24.162871 3251787 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 12:10:24.163202 3251787 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 12:10:24.658380 3251787 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 12:10:25.527821 3251787 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 12:10:25.859623 3251787 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 12:10:26.422247 3251787 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 12:10:27.535921 3251787 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 12:10:27.536986 3251787 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 12:10:27.540094 3251787 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 12:10:27.543577 3251787 out.go:252]   - Booting up control plane ...
	I1217 12:10:27.543687 3251787 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 12:10:27.556923 3251787 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 12:10:27.558398 3251787 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 12:10:27.577287 3251787 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 12:10:27.577433 3251787 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 12:10:27.586471 3251787 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 12:10:27.586832 3251787 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 12:10:27.587027 3251787 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 12:10:27.716909 3251787 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 12:10:27.717031 3251787 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 12:10:28.718102 3251787 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001744684s
	I1217 12:10:28.721814 3251787 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 12:10:28.721912 3251787 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1217 12:10:28.722002 3251787 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 12:10:28.722085 3251787 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 12:10:31.723023 3251787 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.000754465s
	I1217 12:10:33.016580 3251787 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.294739961s
	I1217 12:10:34.723450 3251787 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001351207s
	I1217 12:10:34.761614 3251787 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 12:10:34.779718 3251787 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 12:10:34.798198 3251787 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 12:10:34.798621 3251787 kubeadm.go:319] [mark-control-plane] Marking the node calico-348887 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 12:10:34.815207 3251787 kubeadm.go:319] [bootstrap-token] Using token: 1kf66x.fgtes217j3b74ehz
	I1217 12:10:34.818219 3251787 out.go:252]   - Configuring RBAC rules ...
	I1217 12:10:34.818364 3251787 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 12:10:34.823479 3251787 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 12:10:34.833761 3251787 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 12:10:34.838722 3251787 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 12:10:34.843060 3251787 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 12:10:34.849257 3251787 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 12:10:35.133474 3251787 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 12:10:35.570678 3251787 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 12:10:36.130159 3251787 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 12:10:36.131241 3251787 kubeadm.go:319] 
	I1217 12:10:36.131318 3251787 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 12:10:36.131330 3251787 kubeadm.go:319] 
	I1217 12:10:36.131408 3251787 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 12:10:36.131417 3251787 kubeadm.go:319] 
	I1217 12:10:36.131449 3251787 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 12:10:36.131513 3251787 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 12:10:36.131568 3251787 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 12:10:36.131575 3251787 kubeadm.go:319] 
	I1217 12:10:36.131629 3251787 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 12:10:36.131638 3251787 kubeadm.go:319] 
	I1217 12:10:36.131686 3251787 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 12:10:36.131694 3251787 kubeadm.go:319] 
	I1217 12:10:36.131745 3251787 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 12:10:36.131823 3251787 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 12:10:36.131895 3251787 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 12:10:36.131903 3251787 kubeadm.go:319] 
	I1217 12:10:36.131989 3251787 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 12:10:36.132070 3251787 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 12:10:36.132080 3251787 kubeadm.go:319] 
	I1217 12:10:36.132164 3251787 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1kf66x.fgtes217j3b74ehz \
	I1217 12:10:36.132273 3251787 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fcce8c321665b01ba73c1ff2f8ce9b2c8663c804203e09b134f0c8209e98634e \
	I1217 12:10:36.132296 3251787 kubeadm.go:319] 	--control-plane 
	I1217 12:10:36.132304 3251787 kubeadm.go:319] 
	I1217 12:10:36.132388 3251787 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 12:10:36.132402 3251787 kubeadm.go:319] 
	I1217 12:10:36.132516 3251787 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1kf66x.fgtes217j3b74ehz \
	I1217 12:10:36.132625 3251787 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fcce8c321665b01ba73c1ff2f8ce9b2c8663c804203e09b134f0c8209e98634e 
	I1217 12:10:36.136981 3251787 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1217 12:10:36.137236 3251787 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 12:10:36.137356 3251787 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 12:10:36.137382 3251787 cni.go:84] Creating CNI manager for "calico"
	I1217 12:10:36.140581 3251787 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1217 12:10:36.143424 3251787 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 12:10:36.143481 3251787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1217 12:10:36.159999 3251787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 12:10:37.670757 3251787 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.510721188s)
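
With the control plane up, minikube writes the 329943-byte Calico manifest to /var/tmp/minikube/cni.yaml and applies it with the version-pinned kubectl, as the two lines above show. A minimal sketch of that step (hypothetical helper, reusing the binary and kubeconfig paths from the log):

// applyCNIManifest stages a manifest on disk and applies it with the
// pinned kubectl. Sketch only; error handling is abbreviated.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyCNIManifest(manifest []byte) error {
	const path = "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.3/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	manifest, err := os.ReadFile("cni.yaml") // placeholder input path
	if err == nil {
		err = applyCNIManifest(manifest)
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
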
	I1217 12:10:37.670863 3251787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 12:10:37.670975 3251787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:10:37.671062 3251787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-348887 minikube.k8s.io/updated_at=2025_12_17T12_10_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=calico-348887 minikube.k8s.io/primary=true
	I1217 12:10:37.900868 3251787 ops.go:34] apiserver oom_adj: -16
	I1217 12:10:37.900989 3251787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:10:38.401267 3251787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:10:38.901664 3251787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:10:39.401133 3251787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:10:39.901884 3251787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:10:40.401295 3251787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:10:40.509698 3251787 kubeadm.go:1114] duration metric: took 2.83877257s to wait for elevateKubeSystemPrivileges
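
The six `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait: after creating the minikube-rbac clusterrolebinding, minikube polls at roughly 500ms intervals until the default ServiceAccount exists (2.83s in this run). A minimal sketch of such a poll, with a hypothetical timeout:

// waitForDefaultSA retries `kubectl get sa default` until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.3/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // the service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}
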
	I1217 12:10:40.509728 3251787 kubeadm.go:403] duration metric: took 21.065454998s to StartCluster
	I1217 12:10:40.509745 3251787 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:40.509810 3251787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:10:40.510774 3251787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:40.510993 3251787 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 12:10:40.511162 3251787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 12:10:40.511575 3251787 config.go:182] Loaded profile config "calico-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 12:10:40.511626 3251787 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:10:40.511693 3251787 addons.go:70] Setting storage-provisioner=true in profile "calico-348887"
	I1217 12:10:40.511713 3251787 addons.go:239] Setting addon storage-provisioner=true in "calico-348887"
	I1217 12:10:40.511742 3251787 host.go:66] Checking if "calico-348887" exists ...
	I1217 12:10:40.512221 3251787 cli_runner.go:164] Run: docker container inspect calico-348887 --format={{.State.Status}}
	I1217 12:10:40.512502 3251787 addons.go:70] Setting default-storageclass=true in profile "calico-348887"
	I1217 12:10:40.512521 3251787 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-348887"
	I1217 12:10:40.512791 3251787 cli_runner.go:164] Run: docker container inspect calico-348887 --format={{.State.Status}}
	I1217 12:10:40.516788 3251787 out.go:179] * Verifying Kubernetes components...
	I1217 12:10:40.520562 3251787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:10:40.551933 3251787 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:10:40.554906 3251787 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:10:40.554937 3251787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 12:10:40.555017 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:40.564499 3251787 addons.go:239] Setting addon default-storageclass=true in "calico-348887"
	I1217 12:10:40.564550 3251787 host.go:66] Checking if "calico-348887" exists ...
	I1217 12:10:40.565088 3251787 cli_runner.go:164] Run: docker container inspect calico-348887 --format={{.State.Status}}
	I1217 12:10:40.590876 3251787 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36068 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/calico-348887/id_ed25519 Username:docker}
	I1217 12:10:40.605664 3251787 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:10:40.605692 3251787 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:10:40.605758 3251787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-348887
	I1217 12:10:40.641938 3251787 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36068 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/calico-348887/id_ed25519 Username:docker}
	I1217 12:10:40.869936 3251787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 12:10:40.870117 3251787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:10:40.955987 3251787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:10:41.060116 3251787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:10:41.694781 3251787 node_ready.go:35] waiting up to 15m0s for node "calico-348887" to be "Ready" ...
	I1217 12:10:41.694895 3251787 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1217 12:10:42.143614 3251787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.083462162s)
	I1217 12:10:42.147529 3251787 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1217 12:10:42.149829 3251787 addons.go:530] duration metric: took 1.638191284s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1217 12:10:42.200213 3251787 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-348887" context rescaled to 1 replicas
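
The sed pipeline at 12:10:40.869936 rewrites the CoreDNS ConfigMap in place; reconstructed from that expression, the stanza it injects ahead of the Corefile's `forward . /etc/resolv.conf` line is:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }

(it also inserts `log` before `errors`), which the "host record injected into CoreDNS's ConfigMap" line then confirms; the coredns deployment was subsequently rescaled to a single replica, per the kapi.go:214 line above.
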
	W1217 12:10:43.702715 3251787 node_ready.go:57] node "calico-348887" has "Ready":"False" status (will retry)
	W1217 12:10:46.199106 3251787 node_ready.go:57] node "calico-348887" has "Ready":"False" status (will retry)
	I1217 12:10:47.698321 3251787 node_ready.go:49] node "calico-348887" is "Ready"
	I1217 12:10:47.698350 3251787 node_ready.go:38] duration metric: took 6.002612424s for node "calico-348887" to be "Ready" ...
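
Node readiness above is decided by the node's Ready condition, retried until it flips from False to True (6.0s here). A minimal sketch of that check using client-go against the same kubeconfig and node name (not minikube's actual helper):

// Prints the Ready condition the node_ready wait above retries on.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "calico-348887", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready:", c.Status) // retried until this is "True"
		}
	}
}
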
	I1217 12:10:47.698362 3251787 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:10:47.698423 3251787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:10:47.712003 3251787 api_server.go:72] duration metric: took 7.200984264s to wait for apiserver process to appear ...
	I1217 12:10:47.712030 3251787 api_server.go:88] waiting for apiserver healthz status ...
	I1217 12:10:47.712049 3251787 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 12:10:47.721633 3251787 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 12:10:47.722790 3251787 api_server.go:141] control plane version: v1.34.3
	I1217 12:10:47.722814 3251787 api_server.go:131] duration metric: took 10.777308ms to wait for apiserver health ...
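
The healthz probe above is a plain HTTPS GET that must return 200 with body "ok". A minimal sketch (TLS verification is skipped here for brevity; minikube trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only shortcut
	}}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
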
	I1217 12:10:47.722824 3251787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 12:10:47.726867 3251787 system_pods.go:59] 9 kube-system pods found
	I1217 12:10:47.726914 3251787 system_pods.go:61] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:47.726925 3251787 system_pods.go:61] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:47.726932 3251787 system_pods.go:61] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:47.726944 3251787 system_pods.go:61] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 12:10:47.726951 3251787 system_pods.go:61] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:47.726957 3251787 system_pods.go:61] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:47.726963 3251787 system_pods.go:61] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:47.726967 3251787 system_pods.go:61] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:47.726974 3251787 system_pods.go:61] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:10:47.726985 3251787 system_pods.go:74] duration metric: took 4.15464ms to wait for pod list to return data ...
	I1217 12:10:47.726993 3251787 default_sa.go:34] waiting for default service account to be created ...
	I1217 12:10:47.729730 3251787 default_sa.go:45] found service account: "default"
	I1217 12:10:47.729796 3251787 default_sa.go:55] duration metric: took 2.796505ms for default service account to be created ...
	I1217 12:10:47.729825 3251787 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 12:10:47.733000 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:47.733040 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:47.733055 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:47.733062 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:47.733069 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 12:10:47.733076 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:47.733087 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:47.733091 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:47.733104 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:47.733109 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:10:47.733134 3251787 retry.go:31] will retry after 214.308792ms: missing components: kube-dns
	I1217 12:10:47.954092 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:47.954137 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:47.954153 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:47.954160 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:47.954167 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 12:10:47.954188 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:47.954207 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:47.954212 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:47.954216 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:47.954222 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:10:47.954244 3251787 retry.go:31] will retry after 378.715911ms: missing components: kube-dns
	I1217 12:10:48.338226 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:48.338279 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:48.338290 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:48.338297 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:48.338312 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 12:10:48.338329 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:48.338341 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:48.338346 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:48.338349 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:48.338356 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:10:48.338385 3251787 retry.go:31] will retry after 319.184556ms: missing components: kube-dns
	I1217 12:10:48.662607 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:48.662646 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:48.662657 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:48.662665 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:48.662670 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:10:48.662675 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:48.662680 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:48.662684 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:48.662688 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:48.662696 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:10:48.662711 3251787 retry.go:31] will retry after 445.537734ms: missing components: kube-dns
	I1217 12:10:49.113102 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:49.113136 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:49.113149 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:49.113156 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:49.113161 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:10:49.113166 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:49.113170 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:49.113174 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:49.113178 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:49.113182 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:10:49.113197 3251787 retry.go:31] will retry after 617.409864ms: missing components: kube-dns
	I1217 12:10:49.734855 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:49.734895 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:49.734905 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:49.734913 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:49.734917 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:10:49.734923 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:49.734933 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:49.734943 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:49.734948 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:49.734952 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:10:49.734972 3251787 retry.go:31] will retry after 943.965946ms: missing components: kube-dns
	I1217 12:10:50.685516 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:50.685566 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:50.685579 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:50.685592 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:50.685607 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:10:50.685629 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:50.685633 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:50.685638 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:50.685650 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:50.685655 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:10:50.685669 3251787 retry.go:31] will retry after 949.068173ms: missing components: kube-dns
	I1217 12:10:51.638285 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:51.638327 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:51.638338 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:51.638345 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:51.638351 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:10:51.638357 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:51.638361 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:51.638372 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:51.638376 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:51.638387 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:10:51.638401 3251787 retry.go:31] will retry after 1.362146442s: missing components: kube-dns
	I1217 12:10:53.008641 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:53.008684 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:53.008696 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:53.008704 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:53.008709 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:10:53.008715 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:53.008720 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:53.008731 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:53.008735 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:53.008746 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:10:53.008763 3251787 retry.go:31] will retry after 1.811971734s: missing components: kube-dns
	I1217 12:10:54.824870 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:54.824908 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:54.824917 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:54.824926 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:54.824933 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:10:54.824938 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:54.824942 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:54.824946 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:54.824956 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:54.824959 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:10:54.824974 3251787 retry.go:31] will retry after 2.058286452s: missing components: kube-dns
	I1217 12:10:56.887757 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:56.887797 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:56.887807 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:56.887817 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:56.887827 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:10:56.887833 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:56.887837 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:56.887841 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:56.887850 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:56.887854 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:10:56.887870 3251787 retry.go:31] will retry after 2.1398893s: missing components: kube-dns
	I1217 12:10:59.047006 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:10:59.047039 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:10:59.047049 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:10:59.047057 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:59.047061 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:10:59.047067 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:10:59.047071 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:10:59.047075 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:10:59.047079 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:10:59.047083 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:10:59.047097 3251787 retry.go:31] will retry after 2.479872192s: missing components: kube-dns
	I1217 12:11:01.530780 3251787 system_pods.go:86] 9 kube-system pods found
	I1217 12:11:01.530818 3251787 system_pods.go:89] "calico-kube-controllers-5c676f698c-njklq" [ec268678-ffdf-4319-9bbd-e0c9012e6f41] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 12:11:01.530828 3251787 system_pods.go:89] "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 12:11:01.530835 3251787 system_pods.go:89] "coredns-66bc5c9577-27txb" [fd4c7d1c-7df0-4c2d-8e3e-25771928bd9d] Running
	I1217 12:11:01.530841 3251787 system_pods.go:89] "etcd-calico-348887" [ca2c9511-f410-4c47-854e-611e0b376a33] Running
	I1217 12:11:01.530846 3251787 system_pods.go:89] "kube-apiserver-calico-348887" [b11afc9c-3715-4a3a-ad87-756a53087fcc] Running
	I1217 12:11:01.530850 3251787 system_pods.go:89] "kube-controller-manager-calico-348887" [276e14c8-734f-4a86-bd8d-bd98ba46bdc3] Running
	I1217 12:11:01.530854 3251787 system_pods.go:89] "kube-proxy-2rlcl" [e56a80a9-f45d-4315-ad7c-b22700828b5d] Running
	I1217 12:11:01.530858 3251787 system_pods.go:89] "kube-scheduler-calico-348887" [a360243f-8b44-40f7-bed7-2c6485405f09] Running
	I1217 12:11:01.530862 3251787 system_pods.go:89] "storage-provisioner" [b2076b19-9fc7-44d9-a9a0-7b54420b9f79] Running
	I1217 12:11:01.530872 3251787 system_pods.go:126] duration metric: took 13.801033521s to wait for k8s-apps to be running ...
	I1217 12:11:01.530884 3251787 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 12:11:01.530941 3251787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 12:11:01.571341 3251787 system_svc.go:56] duration metric: took 40.448074ms WaitForService to wait for kubelet
	I1217 12:11:01.571374 3251787 kubeadm.go:587] duration metric: took 21.060358444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 12:11:01.571393 3251787 node_conditions.go:102] verifying NodePressure condition ...
	I1217 12:11:01.574497 3251787 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 12:11:01.574531 3251787 node_conditions.go:123] node cpu capacity is 2
	I1217 12:11:01.574546 3251787 node_conditions.go:105] duration metric: took 3.147973ms to run NodePressure ...
	I1217 12:11:01.574560 3251787 start.go:242] waiting for startup goroutines ...
	I1217 12:11:01.574567 3251787 start.go:247] waiting for cluster config update ...
	I1217 12:11:01.574579 3251787 start.go:256] writing updated cluster config ...
	I1217 12:11:01.574866 3251787 ssh_runner.go:195] Run: rm -f paused
	I1217 12:11:01.579410 3251787 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 12:11:01.582921 3251787 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-27txb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:01.587752 3251787 pod_ready.go:94] pod "coredns-66bc5c9577-27txb" is "Ready"
	I1217 12:11:01.587782 3251787 pod_ready.go:86] duration metric: took 4.833387ms for pod "coredns-66bc5c9577-27txb" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:01.590036 3251787 pod_ready.go:83] waiting for pod "etcd-calico-348887" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:01.594815 3251787 pod_ready.go:94] pod "etcd-calico-348887" is "Ready"
	I1217 12:11:01.594884 3251787 pod_ready.go:86] duration metric: took 4.778725ms for pod "etcd-calico-348887" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:01.597651 3251787 pod_ready.go:83] waiting for pod "kube-apiserver-calico-348887" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:01.602826 3251787 pod_ready.go:94] pod "kube-apiserver-calico-348887" is "Ready"
	I1217 12:11:01.602853 3251787 pod_ready.go:86] duration metric: took 5.174213ms for pod "kube-apiserver-calico-348887" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:01.605476 3251787 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-348887" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:01.986279 3251787 pod_ready.go:94] pod "kube-controller-manager-calico-348887" is "Ready"
	I1217 12:11:01.986305 3251787 pod_ready.go:86] duration metric: took 380.802469ms for pod "kube-controller-manager-calico-348887" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:02.183528 3251787 pod_ready.go:83] waiting for pod "kube-proxy-2rlcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:02.583521 3251787 pod_ready.go:94] pod "kube-proxy-2rlcl" is "Ready"
	I1217 12:11:02.583547 3251787 pod_ready.go:86] duration metric: took 399.992987ms for pod "kube-proxy-2rlcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:02.787324 3251787 pod_ready.go:83] waiting for pod "kube-scheduler-calico-348887" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:03.183145 3251787 pod_ready.go:94] pod "kube-scheduler-calico-348887" is "Ready"
	I1217 12:11:03.183173 3251787 pod_ready.go:86] duration metric: took 395.8185ms for pod "kube-scheduler-calico-348887" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:11:03.183185 3251787 pod_ready.go:40] duration metric: took 1.603735051s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 12:11:03.239049 3251787 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1217 12:11:03.242305 3251787 out.go:179] * Done! kubectl is now configured to use "calico-348887" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511207273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511268646Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511382582Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511463852Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511528459Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511597192Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511654275Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511737137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511807372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511906391Z" level=info msg="Connect containerd service"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.512274624Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.513135250Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526293232Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526625018Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526753222Z" level=info msg="Start subscribing containerd event"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526875034Z" level=info msg="Start recovering state"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.563780213Z" level=info msg="Start event monitor"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.563957803Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564027291Z" level=info msg="Start streaming server"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564090232Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564145632Z" level=info msg="runtime interface starting up..."
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564203560Z" level=info msg="starting plugins..."
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564286234Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 11:56:00 no-preload-118262 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.567526269Z" level=info msg="containerd successfully booted in 0.088039s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:11:06.111457    8190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:11:06.111871    8190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:11:06.114185    8190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:11:06.114548    8190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:11:06.116052    8190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 12:11:06 up 17:53,  0 user,  load average: 2.85, 1.73, 1.42
	Linux no-preload-118262 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 12:11:02 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:11:03 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 17 12:11:03 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:11:03 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:11:03 no-preload-118262 kubelet[8049]: E1217 12:11:03.547978    8049 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:11:03 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:11:03 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:11:04 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 17 12:11:04 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:11:04 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:11:04 no-preload-118262 kubelet[8055]: E1217 12:11:04.308088    8055 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:11:04 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:11:04 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:11:04 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 17 12:11:04 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:11:04 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:11:05 no-preload-118262 kubelet[8073]: E1217 12:11:05.002832    8073 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:11:05 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:11:05 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:11:05 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 17 12:11:05 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:11:05 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:11:05 no-preload-118262 kubelet[8117]: E1217 12:11:05.826708    8117 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:11:05 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:11:05 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
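
The kubelet journal above is the root cause for this profile: every systemd restart (counter 1202-1205 and climbing) dies in config validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up and the earlier kubectl "describe nodes" probe is refused on localhost:8443. A minimal host-side check of the cgroup mode, assuming shell access to the CI host (plain coreutils plus the docker CLI; nothing here is minikube-specific):

    # cgroup2fs => cgroup v2; tmpfs => cgroup v1, the mode this kubelet build rejects
    stat -fc %T /sys/fs/cgroup/
    # docker reports the cgroup version it hands to containers (1 or 2)
    docker info --format '{{.CgroupVersion}}'
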
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262: exit status 2 (370.562721ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-118262" apiserver is not running, skipping kubectl commands (state="Stopped")
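
Because the apiserver is down the harness skips its kubectl checks, so the remaining signal is the kubelet journal inside the node. A by-hand way to pull it, assuming the docker driver and the profile name from this run (the journalctl flags are standard; the line count is arbitrary):

    out/minikube-linux-arm64 ssh -p no-preload-118262 "sudo journalctl -u kubelet --no-pager -n 50"
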
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.94s)

x
+
TestStartStop/group/newest-cni/serial/Pause (9.62s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-669680 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (323.17788ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-669680 -n newest-cni-669680
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (308.126214ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-669680 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (321.908807ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-669680 -n newest-cni-669680
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (307.722806ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
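
All five probes in this test come back "Stopped", where pause should leave the apiserver "Paused" and unpause should return both the apiserver and the kubelet to "Running". A by-hand replay of the same sequence, reusing the exact binary, flags, and Go templates the test runs (only the profile name is specific to this run):

    out/minikube-linux-arm64 pause -p newest-cni-669680 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680   # test wants "Paused"
    out/minikube-linux-arm64 unpause -p newest-cni-669680 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680   # test wants "Running"
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-669680     # test wants "Running"
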
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-669680
helpers_test.go:244: (dbg) docker inspect newest-cni-669680:

-- stdout --
	[
	    {
	        "Id": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	        "Created": "2025-12-17T11:50:38.904543162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3219980,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T12:00:44.656180291Z",
	            "FinishedAt": "2025-12-17T12:00:43.27484179Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hosts",
	        "LogPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc-json.log",
	        "Name": "/newest-cni-669680",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-669680:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-669680",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	                "LowerDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-669680",
	                "Source": "/var/lib/docker/volumes/newest-cni-669680/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-669680",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-669680",
	                "name.minikube.sigs.k8s.io": "newest-cni-669680",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f695758c865267c895635ea7898bf1b9d81e4dd5864219138eceead759e9a1b",
	            "SandboxKey": "/var/run/docker/netns/9f695758c865",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-669680": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:62:0f:03:13:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e84740d61c89f51b13c32d88b9c5aafc9e8e1ba5e275e3db72c9a38077e44a94",
	                    "EndpointID": "b90d44188d07afa11a62007f533d5391259eb969677e3f00be6723f39985284a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-669680",
	                        "23474ef32ddb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
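
Most of the inspect payload above is irrelevant to this failure; the telling fields are "State.Status" ("running") and "State.Paused" (false): the kic container itself is up, so the "Stopped" statuses come from the Kubernetes processes inside it, not from Docker. A quick filter for just those two fields, assuming only a local docker CLI:

    docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' newest-cni-669680
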
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (321.466557ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-669680 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-669680 logs -n 25: (1.551180936s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ stop    │ -p no-preload-118262 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ addons  │ enable dashboard -p no-preload-118262 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-669680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:58 UTC │                     │
	│ stop    │ -p newest-cni-669680 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │ 17 Dec 25 12:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-669680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │ 17 Dec 25 12:00 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │                     │
	│ image   │ newest-cni-669680 image list --format=json                                                                                                                                                                                                               │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:06 UTC │ 17 Dec 25 12:06 UTC │
	│ pause   │ -p newest-cni-669680 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:06 UTC │ 17 Dec 25 12:07 UTC │
	│ unpause │ -p newest-cni-669680 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:07 UTC │ 17 Dec 25 12:07 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 12:00:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 12:00:44.347526 3219848 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:00:44.347663 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347673 3219848 out.go:374] Setting ErrFile to fd 2...
	I1217 12:00:44.347678 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347938 3219848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 12:00:44.348321 3219848 out.go:368] Setting JSON to false
	I1217 12:00:44.349222 3219848 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63795,"bootTime":1765909050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 12:00:44.349300 3219848 start.go:143] virtualization:  
	I1217 12:00:44.352466 3219848 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 12:00:44.356190 3219848 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 12:00:44.356282 3219848 notify.go:221] Checking for updates...
	I1217 12:00:44.362135 3219848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:00:44.365177 3219848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:44.368881 3219848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 12:00:44.372015 3219848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 12:00:44.375014 3219848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:00:44.378336 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:44.378951 3219848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:00:44.413369 3219848 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 12:00:44.413513 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.473970 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.464532408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.474081 3219848 docker.go:319] overlay module found
	I1217 12:00:44.477205 3219848 out.go:179] * Using the docker driver based on existing profile
	I1217 12:00:44.480155 3219848 start.go:309] selected driver: docker
	I1217 12:00:44.480182 3219848 start.go:927] validating driver "docker" against &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.480300 3219848 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:00:44.481122 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.568687 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.559079636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.569054 3219848 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 12:00:44.569088 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:44.569145 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:44.569196 3219848 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.574245 3219848 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 12:00:44.576964 3219848 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 12:00:44.579814 3219848 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 12:00:44.582545 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:44.582593 3219848 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 12:00:44.582604 3219848 cache.go:65] Caching tarball of preloaded images
	I1217 12:00:44.582624 3219848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 12:00:44.582700 3219848 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 12:00:44.582711 3219848 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 12:00:44.582826 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.602190 3219848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 12:00:44.602216 3219848 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 12:00:44.602262 3219848 cache.go:243] Successfully downloaded all kic artifacts
	I1217 12:00:44.602326 3219848 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:00:44.602428 3219848 start.go:364] duration metric: took 68.29µs to acquireMachinesLock for "newest-cni-669680"
	I1217 12:00:44.602457 3219848 start.go:96] Skipping create...Using existing machine configuration
	I1217 12:00:44.602505 3219848 fix.go:54] fixHost starting: 
	I1217 12:00:44.602917 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.620734 3219848 fix.go:112] recreateIfNeeded on newest-cni-669680: state=Stopped err=<nil>
	W1217 12:00:44.620765 3219848 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 12:00:44.760258 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:46.760539 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:44.623987 3219848 out.go:252] * Restarting existing docker container for "newest-cni-669680" ...
	I1217 12:00:44.624072 3219848 cli_runner.go:164] Run: docker start newest-cni-669680
	I1217 12:00:44.870900 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.893559 3219848 kic.go:432] container "newest-cni-669680" state is running.
	I1217 12:00:44.894282 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:44.917205 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.917570 3219848 machine.go:94] provisionDockerMachine start ...
	I1217 12:00:44.917645 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:44.945980 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:44.946096 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:44.946104 3219848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:00:44.946864 3219848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 12:00:48.084367 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 12:00:48.084399 3219848 ubuntu.go:182] provisioning hostname "newest-cni-669680"
	I1217 12:00:48.084507 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.104367 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.104656 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.104680 3219848 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 12:00:48.247265 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 12:00:48.247353 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.270652 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.270788 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.270817 3219848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:00:48.417473 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
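	The SSH script above is how minikube keeps /etc/hosts in step with the provisioned hostname: if no entry for the hostname exists, it rewrites an existing 127.0.1.1 line in place, otherwise it appends one. A quick way to confirm the result from the host (a sketch; minikube ssh passes the command through to the node):
	
	    # verify the hostname and its /etc/hosts mapping inside the node
	    minikube ssh -p newest-cni-669680 -- "hostname && grep newest-cni-669680 /etc/hosts"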
	I1217 12:00:48.417557 3219848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 12:00:48.417596 3219848 ubuntu.go:190] setting up certificates
	I1217 12:00:48.417639 3219848 provision.go:84] configureAuth start
	I1217 12:00:48.417749 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:48.437471 3219848 provision.go:143] copyHostCerts
	I1217 12:00:48.437568 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 12:00:48.437587 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 12:00:48.437717 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 12:00:48.437858 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 12:00:48.437877 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 12:00:48.437916 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 12:00:48.438005 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 12:00:48.438028 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 12:00:48.438055 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 12:00:48.438157 3219848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
	I1217 12:00:48.577436 3219848 provision.go:177] copyRemoteCerts
	I1217 12:00:48.577506 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:00:48.577546 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.595338 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.692538 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:00:48.711734 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 12:00:48.729881 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 12:00:48.748237 3219848 provision.go:87] duration metric: took 330.555362ms to configureAuth
	I1217 12:00:48.748262 3219848 ubuntu.go:206] setting minikube options for container-runtime
	I1217 12:00:48.748550 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:48.748561 3219848 machine.go:97] duration metric: took 3.830976751s to provisionDockerMachine
	I1217 12:00:48.748569 3219848 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 12:00:48.748581 3219848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:00:48.748643 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:00:48.748683 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.766578 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.864654 3219848 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:00:48.868220 3219848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 12:00:48.868249 3219848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 12:00:48.868261 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 12:00:48.868318 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 12:00:48.868401 3219848 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 12:00:48.868523 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:00:48.876210 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:48.894408 3219848 start.go:296] duration metric: took 145.823675ms for postStartSetup
	I1217 12:00:48.894507 3219848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 12:00:48.894563 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.913872 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.010734 3219848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 12:00:49.017136 3219848 fix.go:56] duration metric: took 4.414624566s for fixHost
	I1217 12:00:49.017182 3219848 start.go:83] releasing machines lock for "newest-cni-669680", held for 4.414721098s
	I1217 12:00:49.017319 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:49.041576 3219848 ssh_runner.go:195] Run: cat /version.json
	I1217 12:00:49.041642 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.041898 3219848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:00:49.041972 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.071567 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.072178 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.261249 3219848 ssh_runner.go:195] Run: systemctl --version
	I1217 12:00:49.267897 3219848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:00:49.272503 3219848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:00:49.272574 3219848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:00:49.280715 3219848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
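	The find/mv step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs (kindnet here) stays active; in this run nothing matched. Should a disabled config ever need restoring, reversing the rename is enough (a sketch under the same paths):
	
	    # restore any CNI configs that were renamed aside
	    for f in /etc/cni/net.d/*.mk_disabled; do
	      [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"
	    done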
	I1217 12:00:49.280743 3219848 start.go:496] detecting cgroup driver to use...
	I1217 12:00:49.280787 3219848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 12:00:49.280844 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 12:00:49.298858 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 12:00:49.313120 3219848 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:00:49.313230 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:00:49.329245 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:00:49.342531 3219848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:00:49.461223 3219848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:00:49.579409 3219848 docker.go:234] disabling docker service ...
	I1217 12:00:49.579510 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:00:49.594800 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:00:49.608313 3219848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:00:49.737460 3219848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:00:49.883222 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:00:49.897339 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:00:49.911914 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 12:00:49.921268 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 12:00:49.930257 3219848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 12:00:49.930398 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 12:00:49.939639 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.948689 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 12:00:49.958342 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.967395 3219848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:00:49.975730 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 12:00:49.984582 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 12:00:49.993553 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 12:00:50.009983 3219848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:00:50.019753 3219848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:00:50.028837 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.142686 3219848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 12:00:50.264183 3219848 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 12:00:50.264308 3219848 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 12:00:50.268160 3219848 start.go:564] Will wait 60s for crictl version
	I1217 12:00:50.268261 3219848 ssh_runner.go:195] Run: which crictl
	I1217 12:00:50.271790 3219848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 12:00:50.298148 3219848 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 12:00:50.298258 3219848 ssh_runner.go:195] Run: containerd --version
	I1217 12:00:50.318643 3219848 ssh_runner.go:195] Run: containerd --version
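	The sed edits above rewrote /etc/containerd/config.toml to use the cgroupfs driver (SystemdCgroup = false) and the io.containerd.runc.v2 shim before containerd was restarted. A sketch for spot-checking what containerd actually loaded (field names in crictl info vary by containerd version):
	
	    # confirm the cgroup driver and runc shim settings survived the restart
	    sudo grep -nE 'SystemdCgroup|runc\.v2' /etc/containerd/config.toml
	    sudo /usr/local/bin/crictl info | grep -i cgroup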
	I1217 12:00:50.346609 3219848 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 12:00:50.349545 3219848 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 12:00:50.366603 3219848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 12:00:50.370482 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:00:50.383622 3219848 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 12:00:50.386526 3219848 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:00:50.386672 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:50.386774 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.415106 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.415132 3219848 containerd.go:534] Images already preloaded, skipping extraction
	I1217 12:00:50.415224 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.444492 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.444517 3219848 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:00:50.444526 3219848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 12:00:50.444639 3219848 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
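	The drop-in above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line clears the base unit's command before minikube supplies its own flags, which is standard systemd override practice. The merged unit can be inspected with systemd's own tooling (a sketch):
	
	    # show the kubelet unit together with all drop-ins, and the effective command line
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart --no-pager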
	I1217 12:00:50.444718 3219848 ssh_runner.go:195] Run: sudo crictl info
	I1217 12:00:50.471453 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:50.471478 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:50.471497 3219848 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 12:00:50.471553 3219848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:00:50.471711 3219848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 12:00:50.471828 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 12:00:50.480867 3219848 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:00:50.480998 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:00:50.488686 3219848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 12:00:50.504356 3219848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 12:00:50.520176 3219848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
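	The three documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are staged as /var/tmp/minikube/kubeadm.yaml.new and diffed against the previous copy a few steps later. Recent kubeadm releases can sanity-check such a file before it is used (a sketch; subcommand availability depends on the kubeadm build):
	
	    # validate the staged kubeadm config against its schema
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new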
	I1217 12:00:50.535930 3219848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 12:00:50.540134 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:00:50.550629 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.669384 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:50.685420 3219848 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 12:00:50.685479 3219848 certs.go:195] generating shared ca certs ...
	I1217 12:00:50.685497 3219848 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:50.685634 3219848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 12:00:50.685683 3219848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 12:00:50.685690 3219848 certs.go:257] generating profile certs ...
	I1217 12:00:50.685787 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 12:00:50.685851 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 12:00:50.685893 3219848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 12:00:50.686084 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 12:00:50.686149 3219848 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 12:00:50.686177 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:00:50.686225 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:00:50.686286 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:00:50.686340 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 12:00:50.686422 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:50.687047 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:00:50.710384 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:00:50.730920 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:00:50.751265 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:00:50.772018 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 12:00:50.790833 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 12:00:50.810114 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:00:50.828402 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:00:50.846753 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:00:50.865705 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 12:00:50.886567 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 12:00:50.904533 3219848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:00:50.917457 3219848 ssh_runner.go:195] Run: openssl version
	I1217 12:00:50.923993 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.931839 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:00:50.939507 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943237 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943304 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.984637 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:00:50.992168 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 12:00:50.999795 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 12:00:51.020372 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024379 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024566 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.066006 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:00:51.074211 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.082049 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 12:00:51.090651 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.094888 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.095004 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.137313 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
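	The openssl/ln sequence above implements OpenSSL's hashed CA directory layout: each certificate is hashed with openssl x509 -hash, and a symlink named <hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 here) is placed in /etc/ssl/certs so TLS clients can locate an issuer by subject hash. The same installation done by hand looks like this (a sketch using the minikubeCA cert from the log):
	
	    # install a CA cert under its subject-hash name, as OpenSSL expects
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"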
	I1217 12:00:51.145186 3219848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:00:51.149385 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 12:00:51.191456 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 12:00:51.232840 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 12:00:51.275219 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 12:00:51.317313 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 12:00:51.358746 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
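	Each openssl x509 -checkend 86400 call above exits 0 only if the certificate remains valid for at least the next 24 hours, which is how minikube decides the existing control-plane certs can be reused. The same check over the whole cert tree (a sketch, paths taken from the log):
	
	    # flag any control-plane cert that expires within 24h
	    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	      openssl x509 -noout -in "$c" -checkend 86400 || echo "expiring soon: $c"
	    done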
	I1217 12:00:51.399851 3219848 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:51.399946 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 12:00:51.400058 3219848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:00:51.427405 3219848 cri.go:89] found id: ""
	I1217 12:00:51.427480 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:00:51.435564 3219848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 12:00:51.435593 3219848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 12:00:51.435648 3219848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 12:00:51.443379 3219848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 12:00:51.443986 3219848 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-669680" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.444236 3219848 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-669680" cluster setting kubeconfig missing "newest-cni-669680" context setting]
	I1217 12:00:51.444696 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.446096 3219848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 12:00:51.454141 3219848 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 12:00:51.454214 3219848 kubeadm.go:602] duration metric: took 18.613293ms to restartPrimaryControlPlane
	I1217 12:00:51.454230 3219848 kubeadm.go:403] duration metric: took 54.392206ms to StartCluster
	I1217 12:00:51.454245 3219848 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.454304 3219848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.455245 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.455481 3219848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 12:00:51.455797 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:51.455846 3219848 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:00:51.455911 3219848 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-669680"
	I1217 12:00:51.455924 3219848 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-669680"
	I1217 12:00:51.455953 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.456410 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456591 3219848 addons.go:70] Setting dashboard=true in profile "newest-cni-669680"
	I1217 12:00:51.457002 3219848 addons.go:239] Setting addon dashboard=true in "newest-cni-669680"
	W1217 12:00:51.457012 3219848 addons.go:248] addon dashboard should already be in state true
	I1217 12:00:51.457034 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.457458 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456605 3219848 addons.go:70] Setting default-storageclass=true in profile "newest-cni-669680"
	I1217 12:00:51.458033 3219848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-669680"
	I1217 12:00:51.458306 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.460659 3219848 out.go:179] * Verifying Kubernetes components...
	I1217 12:00:51.463611 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:51.495379 3219848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:00:51.502753 3219848 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.502777 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 12:00:51.502845 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.511997 3219848 addons.go:239] Setting addon default-storageclass=true in "newest-cni-669680"
	I1217 12:00:51.512038 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.512543 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.527586 3219848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 12:00:51.536600 3219848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 12:00:49.260592 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:51.760613 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:51.539513 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 12:00:51.539539 3219848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 12:00:51.539612 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.555471 3219848 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.555502 3219848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:00:51.555570 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.569622 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.592016 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.601832 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.689678 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:51.731294 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.749491 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.814469 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 12:00:51.814496 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 12:00:51.839602 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 12:00:51.839672 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 12:00:51.852764 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 12:00:51.852827 3219848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 12:00:51.865089 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 12:00:51.865152 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 12:00:51.878190 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 12:00:51.878259 3219848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 12:00:51.890831 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 12:00:51.890854 3219848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 12:00:51.903270 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 12:00:51.903294 3219848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 12:00:51.916127 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 12:00:51.916153 3219848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 12:00:51.929059 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 12:00:51.929123 3219848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 12:00:51.942273 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.502896 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.502968 3219848 retry.go:31] will retry after 269.884821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.503026 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503067 3219848 retry.go:31] will retry after 319.702383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503040 3219848 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:00:52.503258 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:52.503300 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503321 3219848 retry.go:31] will retry after 196.810414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.700893 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.770562 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.770599 3219848 retry.go:31] will retry after 481.518663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.773838 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:52.823221 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:52.855276 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.855328 3219848 retry.go:31] will retry after 391.667259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.894877 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.894917 3219848 retry.go:31] will retry after 200.928151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.004579 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
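
In parallel with the addon retries, api_server.go polls for the apiserver process itself: the pgrep runs recur roughly every 500ms (…52.503, 53.004, 53.504, 54.003…). pgrep's -f matches against the full command line, -x requires the whole line to match the pattern, and -n reports only the newest match; exit status 0 means a matching process exists. A hypothetical Go sketch of such a wait loop — the function name and the 500ms cadence are assumptions read off the timestamps, not minikube's implementation:

package apiserverwait

import (
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process
// whose command line mentions "minikube" appears, or the timeout lapses.
func waitForAPIServerProcess(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true // pgrep exits 0 only when a process matched
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}
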
	I1217 12:00:53.096394 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:53.155868 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.155897 3219848 retry.go:31] will retry after 564.238822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.248228 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:53.253066 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.368787 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.368822 3219848 retry.go:31] will retry after 377.070742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.369052 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.369071 3219848 retry.go:31] will retry after 485.691157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.504052 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:53.720468 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:53.746162 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:53.794993 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.795027 3219848 retry.go:31] will retry after 872.052872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.811480 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.811533 3219848 retry.go:31] will retry after 558.92589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.855758 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.922708 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.922745 3219848 retry.go:31] will retry after 803.451465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.003704 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:54.260476 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:56.760549 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
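
The two warnings above come from a different process (3212985): the parallel no-preload test polling its own cluster's node for the Ready condition, its apiserver at 192.168.85.2:8443 equally unreachable; the second entry appears out of timestamp order simply because the two processes' buffered outputs are interleaved. A minimal client-go sketch of that kind of readiness check (clientset construction omitted; helper name assumed):

package nodeready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady fetches the Node and reads its Ready condition. While the
// apiserver refuses connections, the Get itself fails, which is exactly
// what the node_ready.go warnings report before retrying.
func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused"
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
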
	I1217 12:00:54.370776 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:54.437621 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.437652 3219848 retry.go:31] will retry after 1.190014231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.503835 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:54.667963 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:54.726498 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:54.728210 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.728287 3219848 retry.go:31] will retry after 1.413986656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:54.813279 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.813372 3219848 retry.go:31] will retry after 1.840693776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.005986 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:55.504112 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:55.628242 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:55.689054 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.689136 3219848 retry.go:31] will retry after 1.799425819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.003624 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:56.142943 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:56.205592 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.205625 3219848 retry.go:31] will retry after 2.655712888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
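
By this point the logged retry intervals have grown from ~0.3s toward ~2.7s, a roughly geometric progression with visible jitter (storage-provisioner: 320ms, 201ms, 564ms, 872ms, 1.41s, 2.66s). One plausible step function producing such a sequence — the factor and jitter bound are guesses fit to the log, not minikube's actual constants:

package backoff

import (
	"math/rand"
	"time"
)

// next multiplies the previous interval by a constant factor and adds
// bounded random jitter, yielding the noisy exponential growth visible
// in the "will retry after" durations above.
func next(prev time.Duration) time.Duration {
	const factor = 1.5
	const jitter = 0.4 // up to +40% randomness per step
	d := time.Duration(float64(prev) * factor)
	return d + time.Duration(rand.Float64()*jitter*float64(d))
}
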
	I1217 12:00:56.503981 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:56.654730 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:56.717604 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.717641 3219848 retry.go:31] will retry after 1.909418395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.004223 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:57.489437 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:57.503984 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:57.562808 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.562840 3219848 retry.go:31] will retry after 3.72719526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.014740 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.503409 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.627253 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:58.690443 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.690481 3219848 retry.go:31] will retry after 3.549926007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.861704 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:58.923654 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.923683 3219848 retry.go:31] will retry after 2.058003245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:59.003967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:59.260028 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:01.761273 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:59.504167 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.018808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.504031 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.982724 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:01.004335 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:01.111365 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.111399 3219848 retry.go:31] will retry after 3.900095446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.291002 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:01.368946 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.368996 3219848 retry.go:31] will retry after 3.675584678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.503381 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.004403 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.241403 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:02.307939 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.307978 3219848 retry.go:31] will retry after 5.738469139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.504084 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.003562 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:04.005140 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:04.259626 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:06.260640 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:08.759809 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
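[annotation] The node_ready.go:55 warnings come from a second test process (pid 3212985, the no-preload cluster) polling its node's Ready condition while that cluster's apiserver at 192.168.85.2:8443 is likewise refusing connections. A sketch of that check with client-go; only the node name and kubeconfig path are taken from the log, the surrounding plumbing is an assumption about what node_ready.go does:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
// A Get against a down apiserver returns the same "connection refused"
// error the log shows, which the caller then retries.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(context.Background(), cs, "no-preload-118262")
	fmt.Println(ready, err)
}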
	I1217 12:01:04.503830 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.003702 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.012660 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:05.045335 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:05.083423 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.083461 3219848 retry.go:31] will retry after 9.235586003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:01:05.118369 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.118401 3219848 retry.go:31] will retry after 3.828272571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.503857 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.003637 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.504078 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.003401 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.503344 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.004170 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
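[annotation] The half-second cadence of sudo pgrep -xnf kube-apiserver.*minikube.* runs throughout this log is minikube waiting for the kube-apiserver process to (re)appear inside the node. A sketch of such a poll loop; the pgrep invocation is copied from the log, while the timeout and interval are assumptions for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf <pattern>` roughly every
// 500ms, as the log shows, until a kube-apiserver process exists or the
// deadline passes. pgrep exits 0 when at least one process matches.
func waitForAPIServerProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess("kube-apiserver.*minikube.*", 2*time.Minute))
}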
	I1217 12:01:08.047658 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:08.113675 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.113710 3219848 retry.go:31] will retry after 7.390134832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.504355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.946950 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:01:09.003509 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:09.011595 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:09.011629 3219848 retry.go:31] will retry after 14.170665244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:01:11.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:13.760361 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:09.503956 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.018957 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.503456 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.004169 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.503808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.003522 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.503603 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.003862 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:14.004363 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:14.319308 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:14.385208 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:14.385243 3219848 retry.go:31] will retry after 5.459360953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:01:16.260406 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:18.759622 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:14.503378 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.006355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504086 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504108 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:15.572879 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:15.572915 3219848 retry.go:31] will retry after 11.777794795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:16.005530 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:16.503503 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.003649 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.503430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.005004 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.504088 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.003423 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:20.760668 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:23.259693 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:19.503667 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.845708 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:19.909350 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:19.909381 3219848 retry.go:31] will retry after 9.722081791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:20.003736 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:20.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.004457 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.504148 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.003426 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.504235 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.004166 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.183313 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:23.244255 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.244289 3219848 retry.go:31] will retry after 19.619062537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.503427 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:24.006966 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:25.259753 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:27.759647 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:24.503758 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.004125 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.503463 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.004155 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.504576 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.003556 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.351598 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:27.419162 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.419195 3219848 retry.go:31] will retry after 15.164194741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.503619 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.003385 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.503474 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.004314 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:29.760524 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:32.259673 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:29.503968 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.632290 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:29.699987 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:29.700018 3219848 retry.go:31] will retry after 12.658501476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:30.003430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:30.503407 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.003818 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.504094 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.003845 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.503410 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.005413 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.503962 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:34.003405 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:34.259722 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:36.759694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:34.503770 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.004969 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.504211 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.003492 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.503881 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.008063 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.504267 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.004154 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.504195 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:39.005022 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:39.260642 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:41.759666 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:39.504074 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.009459 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.504054 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.004134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.504134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.003867 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.359033 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:42.424319 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.424350 3219848 retry.go:31] will retry after 39.499798177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.503565 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.584549 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:42.654579 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.654612 3219848 retry.go:31] will retry after 22.182784721s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.864124 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:42.925874 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.925916 3219848 retry.go:31] will retry after 18.241160237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:43.004102 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:43.504356 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:44.004028 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:44.259623 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:46.260805 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:48.760674 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:44.503929 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.003640 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.503747 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.003443 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.003372 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.503601 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.003536 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.503987 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:49.003434 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:51.260164 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:53.759783 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:49.504162 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.003493 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.503875 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.004324 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.503888 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:51.503983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:51.536666 3219848 cri.go:89] found id: ""
	I1217 12:01:51.536689 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.536698 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:51.536704 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:51.536768 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:51.562047 3219848 cri.go:89] found id: ""
	I1217 12:01:51.562070 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.562078 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:51.562084 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:51.562149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:51.586286 3219848 cri.go:89] found id: ""
	I1217 12:01:51.586309 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.586317 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:51.586323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:51.586381 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:51.611834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.611858 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.611867 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:51.611873 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:51.611942 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:51.637620 3219848 cri.go:89] found id: ""
	I1217 12:01:51.637643 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.637651 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:51.637658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:51.637715 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:51.663176 3219848 cri.go:89] found id: ""
	I1217 12:01:51.663198 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.663206 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:51.663212 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:51.663273 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:51.688038 3219848 cri.go:89] found id: ""
	I1217 12:01:51.688064 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.688083 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:51.688090 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:51.688159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:51.715834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.715860 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.715870 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:51.715879 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:51.715890 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:51.772533 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:51.772567 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:51.788370 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:51.788400 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:51.855552 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:51.847275    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.848081    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849574    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849998    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.851493    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:51.847275    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.848081    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849574    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849998    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.851493    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:51.855615 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:51.855635 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:51.880660 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:51.880693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
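When the apiserver wait stalls, minikube runs a diagnostics pass: it asks crictl for containers of each control-plane component, finds none ("0 containers"), and falls back to gathering kubelet, dmesg, containerd, and container-status logs. An empty list for every component, as here, means the kubelet never managed to start the static pods. A self-contained sketch of that component probe, assuming crictl is on the PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeComponents mirrors the cri.go/logs.go loop above: list container IDs
// for each control-plane component and report the ones with no containers.
func probeComponents() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil || strings.TrimSpace(string(out)) == "" {
			fmt.Printf("no container was found matching %q\n", name)
		}
	}
}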
	W1217 12:01:56.259727 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:58.760523 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:54.414807 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:54.425488 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:54.425558 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:54.453841 3219848 cri.go:89] found id: ""
	I1217 12:01:54.453870 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.453880 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:54.453887 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:54.453946 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:54.478957 3219848 cri.go:89] found id: ""
	I1217 12:01:54.478982 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.478991 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:54.478998 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:54.479060 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:54.504488 3219848 cri.go:89] found id: ""
	I1217 12:01:54.504516 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.504535 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:54.504543 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:54.504606 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:54.529418 3219848 cri.go:89] found id: ""
	I1217 12:01:54.529445 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.529454 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:54.529460 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:54.529519 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:54.557757 3219848 cri.go:89] found id: ""
	I1217 12:01:54.557781 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.557790 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:54.557797 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:54.557854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:54.586961 3219848 cri.go:89] found id: ""
	I1217 12:01:54.586996 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.587004 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:54.587011 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:54.587077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:54.612590 3219848 cri.go:89] found id: ""
	I1217 12:01:54.612617 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.612626 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:54.612633 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:54.612694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:54.638207 3219848 cri.go:89] found id: ""
	I1217 12:01:54.638234 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.638243 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:54.638253 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:54.638264 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:54.695917 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:54.695955 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:54.712729 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:54.712759 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:54.782298 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:54.774102    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.774684    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776463    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776850    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.778510    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:54.774102    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.774684    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776463    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776850    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.778510    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:54.782321 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:54.782333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:54.807165 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:54.807196 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:01:57.336099 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:57.346978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:57.347048 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:57.371132 3219848 cri.go:89] found id: ""
	I1217 12:01:57.371155 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.371163 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:57.371169 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:57.371232 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:57.396905 3219848 cri.go:89] found id: ""
	I1217 12:01:57.396933 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.396942 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:57.396948 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:57.397011 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:57.425337 3219848 cri.go:89] found id: ""
	I1217 12:01:57.425366 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.425374 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:57.425381 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:57.425440 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:57.449681 3219848 cri.go:89] found id: ""
	I1217 12:01:57.449709 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.449718 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:57.449725 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:57.449784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:57.475302 3219848 cri.go:89] found id: ""
	I1217 12:01:57.475328 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.475337 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:57.475343 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:57.475412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:57.500270 3219848 cri.go:89] found id: ""
	I1217 12:01:57.500344 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.500369 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:57.500389 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:57.500509 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:57.527492 3219848 cri.go:89] found id: ""
	I1217 12:01:57.527519 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.527532 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:57.527538 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:57.527650 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:57.553482 3219848 cri.go:89] found id: ""
	I1217 12:01:57.553549 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.553576 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:57.553602 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:57.553627 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:57.609257 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:57.609292 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:57.625325 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:57.625352 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:57.691022 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:57.682604    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.683106    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.684793    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.685506    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.687043    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:57.682604    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.683106    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.684793    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.685506    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.687043    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:57.691048 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:57.691061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:57.716301 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:57.716333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 12:02:01.260216 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:02:02.764189 3212985 node_ready.go:38] duration metric: took 6m0.005070756s for node "no-preload-118262" to be "Ready" ...
	I1217 12:02:02.767452 3212985 out.go:203] 
	W1217 12:02:02.770608 3212985 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 12:02:02.770638 3212985 out.go:285] * 
	W1217 12:02:02.772986 3212985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 12:02:02.776078 3212985 out.go:203] 
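That ends the no-preload run: after 6m0s of connection-refused responses the node never reported Ready, and minikube exits with GUEST_START. The check it was looping on amounts to polling the node's Ready condition through the API, swallowing transient errors so the poll keeps going, which is why the warnings repeat for the full six minutes. A client-go sketch of that wait, with the kubeconfig path and node name taken from the log and the poll interval an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls a node's Ready condition until it becomes True or the
// timeout expires, returning context.DeadlineExceeded on timeout (the
// "WaitNodeCondition: context deadline exceeded" in the log).
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "no-preload-118262", 6*time.Minute); err != nil {
		fmt.Println("node not ready:", err)
	}
}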
	I1217 12:02:00.244802 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:00.315692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:00.315780 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:00.376798 3219848 cri.go:89] found id: ""
	I1217 12:02:00.376842 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.376852 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:00.376859 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:00.376949 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:00.414474 3219848 cri.go:89] found id: ""
	I1217 12:02:00.414502 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.414513 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:00.414520 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:00.414590 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:00.447266 3219848 cri.go:89] found id: ""
	I1217 12:02:00.447306 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.447316 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:00.447323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:00.447415 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:00.477352 3219848 cri.go:89] found id: ""
	I1217 12:02:00.477378 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.477387 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:00.477394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:00.477457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:00.506577 3219848 cri.go:89] found id: ""
	I1217 12:02:00.506605 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.506614 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:00.506621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:00.506720 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:00.533943 3219848 cri.go:89] found id: ""
	I1217 12:02:00.533966 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.533975 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:00.533982 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:00.534051 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:00.560396 3219848 cri.go:89] found id: ""
	I1217 12:02:00.560462 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.560472 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:00.560479 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:00.560573 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:00.587859 3219848 cri.go:89] found id: ""
	I1217 12:02:00.587931 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.587955 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:00.587983 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:00.588035 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:00.620134 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:00.620217 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:00.677187 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:00.677223 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:00.694138 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:00.694242 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:00.762938 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:00.753622    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.754338    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.755466    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757073    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757631    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:00.753622    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.754338    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.755466    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757073    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757631    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:00.763025 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:00.763058 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
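Each poll cycle above shells out to crictl once per expected control-plane component; the repeated found id: "" / 0 containers results mean that nothing matching the name filter exists in the k8s.io runc root at all. A local equivalent of that check, as a sketch run directly on the node rather than over minikube's ssh_runner, assuming crictl is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same check the log performs: list all containers whose name
        // matches, printing only IDs. An empty result is the
        // `found id: "" ... 0 containers` case above.
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }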
	I1217 12:02:01.167394 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:02:01.232118 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:01.232151 3219848 retry.go:31] will retry after 39.797194994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
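The "will retry after 39.797194994s" line is minikube's addon-apply retry loop: the apply fails because kubectl's client-side validation has to download the apiserver's OpenAPI document first, so every apply fails while the apiserver is down, and each failure schedules a jittered, growing delay before the next attempt. A sketch of that pattern follows; applyWithRetry and the backoff constants are illustrative, not the exact retry.go policy.

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs a kubectl apply until it succeeds or the
    // attempts run out, sleeping a jittered, doubling delay between
    // tries - the "apply failed, will retry after ..." pattern above.
    func applyWithRetry(manifest string, attempts int) error {
        base := 10 * time.Second
        for i := 0; i < attempts; i++ {
            err := exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            base *= 2
        }
        return fmt.Errorf("apply %s: retries exhausted", manifest)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 4); err != nil {
            fmt.Println(err)
        }
    }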
	I1217 12:02:03.292559 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:03.304708 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:03.304784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:03.332491 3219848 cri.go:89] found id: ""
	I1217 12:02:03.332511 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.332519 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:03.332526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:03.332630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:03.361080 3219848 cri.go:89] found id: ""
	I1217 12:02:03.361107 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.361115 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:03.361121 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:03.361179 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:03.397354 3219848 cri.go:89] found id: ""
	I1217 12:02:03.397382 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.397391 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:03.397397 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:03.397473 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:03.431465 3219848 cri.go:89] found id: ""
	I1217 12:02:03.431493 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.431502 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:03.431509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:03.431569 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:03.464102 3219848 cri.go:89] found id: ""
	I1217 12:02:03.464125 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.464133 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:03.464139 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:03.464197 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:03.497848 3219848 cri.go:89] found id: ""
	I1217 12:02:03.497879 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.497888 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:03.497895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:03.497952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:03.568108 3219848 cri.go:89] found id: ""
	I1217 12:02:03.568130 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.568139 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:03.568144 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:03.568202 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:03.632108 3219848 cri.go:89] found id: ""
	I1217 12:02:03.632136 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.632151 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:03.632161 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:03.632173 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:03.724972 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:03.708641    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.709073    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.716627    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.717278    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.719035    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:03.708641    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.709073    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.716627    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.717278    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.719035    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:03.725000 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:03.725012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:03.753083 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:03.753174 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:03.790574 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:03.790596 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:03.863404 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:03.863488 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:04.837606 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:02:04.901525 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:04.901562 3219848 retry.go:31] will retry after 21.256241349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:06.385200 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:06.395642 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:06.395734 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:06.422500 3219848 cri.go:89] found id: ""
	I1217 12:02:06.422526 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.422535 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:06.422542 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:06.422603 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:06.449741 3219848 cri.go:89] found id: ""
	I1217 12:02:06.449763 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.449773 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:06.449779 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:06.449836 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:06.478823 3219848 cri.go:89] found id: ""
	I1217 12:02:06.478844 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.478852 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:06.478858 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:06.478924 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:06.507270 3219848 cri.go:89] found id: ""
	I1217 12:02:06.507298 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.507307 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:06.507313 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:06.507390 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:06.536741 3219848 cri.go:89] found id: ""
	I1217 12:02:06.536774 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.536783 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:06.536790 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:06.536859 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:06.569124 3219848 cri.go:89] found id: ""
	I1217 12:02:06.569152 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.569161 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:06.569168 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:06.569223 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:06.597119 3219848 cri.go:89] found id: ""
	I1217 12:02:06.597140 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.597148 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:06.597155 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:06.597213 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:06.623129 3219848 cri.go:89] found id: ""
	I1217 12:02:06.623152 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.623161 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:06.623171 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:06.623181 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:06.679634 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:06.679669 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:06.696235 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:06.696273 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:06.764004 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:06.755277    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.755704    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.757595    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.758654    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.760132    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:06.755277    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.755704    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.757595    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.758654    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.760132    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:06.764031 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:06.764044 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:06.789440 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:06.789478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:09.319544 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:09.335051 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:09.335144 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:09.363250 3219848 cri.go:89] found id: ""
	I1217 12:02:09.363278 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.363288 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:09.363296 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:09.363357 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:09.387533 3219848 cri.go:89] found id: ""
	I1217 12:02:09.387598 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.387624 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:09.387646 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:09.387735 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:09.411943 3219848 cri.go:89] found id: ""
	I1217 12:02:09.411970 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.411978 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:09.411985 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:09.412042 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:09.438061 3219848 cri.go:89] found id: ""
	I1217 12:02:09.438127 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.438151 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:09.438167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:09.438250 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:09.463378 3219848 cri.go:89] found id: ""
	I1217 12:02:09.463407 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.463415 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:09.463422 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:09.463481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:09.494069 3219848 cri.go:89] found id: ""
	I1217 12:02:09.494098 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.494107 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:09.494114 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:09.494178 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:09.526694 3219848 cri.go:89] found id: ""
	I1217 12:02:09.526771 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.526795 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:09.526815 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:09.526923 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:09.553523 3219848 cri.go:89] found id: ""
	I1217 12:02:09.553585 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.553616 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:09.553641 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:09.553678 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:09.618427 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:09.618463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:09.634212 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:09.634244 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:09.696895 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:09.688293    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.688801    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690481    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690806    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.692990    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:09.688293    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.688801    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690481    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690806    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.692990    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:09.696914 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:09.696926 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:09.722288 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:09.722324 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:12.249861 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:12.261558 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:12.261626 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:12.293092 3219848 cri.go:89] found id: ""
	I1217 12:02:12.293113 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.293121 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:12.293128 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:12.293188 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:12.319347 3219848 cri.go:89] found id: ""
	I1217 12:02:12.319374 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.319384 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:12.319390 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:12.319448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:12.343912 3219848 cri.go:89] found id: ""
	I1217 12:02:12.343939 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.343948 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:12.343955 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:12.344013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:12.370544 3219848 cri.go:89] found id: ""
	I1217 12:02:12.370571 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.370581 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:12.370587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:12.370645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:12.397552 3219848 cri.go:89] found id: ""
	I1217 12:02:12.397578 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.397587 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:12.397593 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:12.397652 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:12.421606 3219848 cri.go:89] found id: ""
	I1217 12:02:12.421673 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.421699 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:12.421715 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:12.421791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:12.447065 3219848 cri.go:89] found id: ""
	I1217 12:02:12.447088 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.447097 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:12.447103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:12.447169 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:12.473547 3219848 cri.go:89] found id: ""
	I1217 12:02:12.473575 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.473583 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:12.473645 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:12.473670 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:12.489529 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:12.489559 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:12.574945 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:12.562789    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567073    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567687    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569241    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569901    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:12.562789    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567073    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567687    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569241    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569901    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:12.574970 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:12.574986 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:12.601521 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:12.601562 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:12.633893 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:12.633920 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:15.190960 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:15.202334 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:15.202461 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:15.231453 3219848 cri.go:89] found id: ""
	I1217 12:02:15.231486 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.231495 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:15.231507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:15.231609 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:15.264097 3219848 cri.go:89] found id: ""
	I1217 12:02:15.264120 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.264129 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:15.264135 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:15.264196 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:15.293547 3219848 cri.go:89] found id: ""
	I1217 12:02:15.293574 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.293583 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:15.293589 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:15.293650 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:15.321905 3219848 cri.go:89] found id: ""
	I1217 12:02:15.321968 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.321991 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:15.322013 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:15.322084 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:15.349052 3219848 cri.go:89] found id: ""
	I1217 12:02:15.349085 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.349095 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:15.349102 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:15.349175 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:15.374350 3219848 cri.go:89] found id: ""
	I1217 12:02:15.374377 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.374387 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:15.374394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:15.374457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:15.412039 3219848 cri.go:89] found id: ""
	I1217 12:02:15.412066 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.412075 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:15.412082 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:15.412153 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:15.441228 3219848 cri.go:89] found id: ""
	I1217 12:02:15.441255 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.441265 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:15.441274 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:15.441309 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:15.467564 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:15.467601 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:15.501031 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:15.501100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:15.564025 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:15.564059 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:15.581879 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:15.581906 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:15.647244 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:15.638661    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.639327    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641006    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641615    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.643194    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:15.638661    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.639327    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641006    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641615    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.643194    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
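Every kubectl failure in these cycles bottoms out in the same symptom: dial tcp [::1]:8443: connect: connection refused. "Refused" (as opposed to a timeout) means the TCP handshake was actively rejected, i.e. no kube-apiserver process is bound to the port inside the node, which is consistent with the empty crictl listings. A quick probe that distinguishes the two cases, as a sketch:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" = nothing listening on the port;
        // a timeout would instead point at firewalling or routing.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }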
	I1217 12:02:18.147543 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:18.158738 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:18.158817 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:18.184828 3219848 cri.go:89] found id: ""
	I1217 12:02:18.184853 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.184862 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:18.184869 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:18.184931 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:18.211904 3219848 cri.go:89] found id: ""
	I1217 12:02:18.211935 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.211944 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:18.211950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:18.212010 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:18.237088 3219848 cri.go:89] found id: ""
	I1217 12:02:18.237154 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.237170 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:18.237177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:18.237239 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:18.278916 3219848 cri.go:89] found id: ""
	I1217 12:02:18.278943 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.278953 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:18.278960 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:18.279018 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:18.307105 3219848 cri.go:89] found id: ""
	I1217 12:02:18.307133 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.307143 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:18.307150 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:18.307210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:18.336099 3219848 cri.go:89] found id: ""
	I1217 12:02:18.336132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.336141 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:18.336148 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:18.336217 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:18.362366 3219848 cri.go:89] found id: ""
	I1217 12:02:18.362432 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.362456 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:18.362472 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:18.362547 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:18.388125 3219848 cri.go:89] found id: ""
	I1217 12:02:18.388151 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.388160 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:18.388169 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:18.388180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:18.456052 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:18.446941    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.447634    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449296    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449839    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.451474    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:18.446941    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.447634    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449296    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449839    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.451474    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
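Every `describe nodes` attempt dies the same way: kubectl cannot open a TCP connection to the apiserver's default port. With no kube-apiserver container found by the crictl probes above, nothing is listening on 8443, so the kernel refuses the dial immediately. The symptom can be reproduced in isolation with a plain dial (a hedged sketch, not part of the test itself):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same dial kubectl attempts under the hood; with no apiserver
	// listening, this fails fast with "connect: connection refused".
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}
```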
	I1217 12:02:18.456114 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:18.456134 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:18.481868 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:18.481899 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:18.525523 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:18.525600 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:18.594163 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:18.594200 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
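With no control-plane containers to inspect, logs.go falls back to host-level sources: kubelet and containerd units via journalctl, kernel warnings via dmesg, and container status via crictl with a docker fallback (the `which crictl || echo crictl` / `|| sudo docker ps -a` chain above). A condensed sketch of that gather step, run locally here for illustration rather than over minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherSources mirrors the four fallback log sources in the cycle above.
// Keys are the labels used in the log; values are the shell pipelines run
// on the node.
var gatherSources = map[string]string{
	"kubelet":          `sudo journalctl -u kubelet -n 400`,
	"containerd":       `sudo journalctl -u containerd -n 400`,
	"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for label, cmd := range gatherSources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", label, err, out)
	}
}
```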
	I1217 12:02:21.113595 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:21.124720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:21.124792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:21.150373 3219848 cri.go:89] found id: ""
	I1217 12:02:21.150397 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.150406 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:21.150412 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:21.150471 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:21.179044 3219848 cri.go:89] found id: ""
	I1217 12:02:21.179069 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.179078 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:21.179085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:21.179156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:21.205105 3219848 cri.go:89] found id: ""
	I1217 12:02:21.205132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.205141 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:21.205147 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:21.205207 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:21.230210 3219848 cri.go:89] found id: ""
	I1217 12:02:21.230235 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.230243 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:21.230251 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:21.230328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:21.265026 3219848 cri.go:89] found id: ""
	I1217 12:02:21.265052 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.265061 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:21.265068 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:21.265128 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:21.302976 3219848 cri.go:89] found id: ""
	I1217 12:02:21.303002 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.303017 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:21.303025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:21.303097 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:21.333258 3219848 cri.go:89] found id: ""
	I1217 12:02:21.333282 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.333292 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:21.333299 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:21.333361 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:21.359283 3219848 cri.go:89] found id: ""
	I1217 12:02:21.359308 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.359317 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:21.359327 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:21.359338 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:21.416901 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:21.416944 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:21.433045 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:21.433074 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:21.505849 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:21.494474    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.495106    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.496640    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.497253    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.500699    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:21.494474    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.495106    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.496640    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.497253    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.500699    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:21.505920 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:21.505948 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:21.534970 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:21.535156 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:21.925292 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:02:21.990437 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:21.990546 3219848 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
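The storage-provisioner apply fails because `kubectl apply` validates manifests against the server's OpenAPI schema, and downloading that schema needs the very apiserver that is down. The addon callback logs "apply failed, will retry" rather than disabling validation; a minimal retry wrapper in the same spirit (a sketch only, with a simplified kubectl invocation, not minikube's actual callback code):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f <manifest>` with a
// growing delay, matching the "apply failed, will retry" behavior above.
func applyWithRetry(manifest string, attempts int) error {
	delay := time.Second
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		fmt.Printf("apply failed (attempt %d): %v; retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```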
	I1217 12:02:24.077604 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:24.089001 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:24.089072 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:24.120652 3219848 cri.go:89] found id: ""
	I1217 12:02:24.120677 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.120688 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:24.120695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:24.120755 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:24.147236 3219848 cri.go:89] found id: ""
	I1217 12:02:24.147263 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.147273 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:24.147280 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:24.147339 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:24.173122 3219848 cri.go:89] found id: ""
	I1217 12:02:24.173147 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.173157 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:24.173163 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:24.173223 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:24.207220 3219848 cri.go:89] found id: ""
	I1217 12:02:24.207243 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.207253 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:24.207259 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:24.207324 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:24.232981 3219848 cri.go:89] found id: ""
	I1217 12:02:24.233004 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.233013 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:24.233020 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:24.233087 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:24.266790 3219848 cri.go:89] found id: ""
	I1217 12:02:24.266815 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.266825 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:24.266832 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:24.266896 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:24.299029 3219848 cri.go:89] found id: ""
	I1217 12:02:24.299056 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.299065 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:24.299072 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:24.299150 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:24.332940 3219848 cri.go:89] found id: ""
	I1217 12:02:24.332966 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.332975 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:24.332984 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:24.332994 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:24.358486 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:24.358520 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:24.395087 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:24.395119 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:24.453543 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:24.453581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:24.469070 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:24.469100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:24.547838 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:24.537508    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.538320    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.540311    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542086    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542713    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:24.537508    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.538320    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.540311    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542086    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542713    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:26.158720 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:02:26.235734 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:26.235852 3219848 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
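All ten dashboard manifests are passed to a single kubectl invocation, so each `-f` file reports the same OpenAPI-download failure independently. The error text itself points at `--validate=false`, which skips the schema download entirely; that would silence the validation errors but still cannot create objects while nothing listens on 8443. The shape of that variant, using a hypothetical subset of the manifest list (illustrative only):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Hypothetical subset of the dashboard manifests from the log.
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply", "--force", "--validate=false"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run() // still fails to reach the apiserver while it is down
}
```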
	I1217 12:02:27.048020 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:27.058730 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:27.058803 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:27.083792 3219848 cri.go:89] found id: ""
	I1217 12:02:27.083815 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.083824 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:27.083831 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:27.083893 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:27.110794 3219848 cri.go:89] found id: ""
	I1217 12:02:27.110820 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.110841 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:27.110865 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:27.110940 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:27.136730 3219848 cri.go:89] found id: ""
	I1217 12:02:27.136760 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.136768 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:27.136775 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:27.136833 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:27.161755 3219848 cri.go:89] found id: ""
	I1217 12:02:27.161780 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.161813 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:27.161819 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:27.161886 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:27.187885 3219848 cri.go:89] found id: ""
	I1217 12:02:27.187912 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.187921 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:27.187928 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:27.187987 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:27.214398 3219848 cri.go:89] found id: ""
	I1217 12:02:27.214424 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.214432 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:27.214440 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:27.214528 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:27.240617 3219848 cri.go:89] found id: ""
	I1217 12:02:27.240642 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.240652 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:27.240658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:27.240740 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:27.272907 3219848 cri.go:89] found id: ""
	I1217 12:02:27.272985 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.273008 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:27.273034 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:27.273061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:27.338834 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:27.338872 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:27.355488 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:27.355518 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:27.425201 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:27.415325    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.415952    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.418308    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.419305    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.420311    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:27.415325    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.415952    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.418308    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.419305    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.420311    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:27.425231 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:27.425245 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:27.451232 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:27.451264 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:29.988282 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:29.998906 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:29.998982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:30.032593 3219848 cri.go:89] found id: ""
	I1217 12:02:30.032619 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.032628 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:30.032635 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:30.032703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:30.065200 3219848 cri.go:89] found id: ""
	I1217 12:02:30.065230 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.065239 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:30.065247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:30.065319 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:30.100730 3219848 cri.go:89] found id: ""
	I1217 12:02:30.100758 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.100767 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:30.100773 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:30.100837 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:30.127247 3219848 cri.go:89] found id: ""
	I1217 12:02:30.127273 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.127293 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:30.127299 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:30.127380 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:30.156586 3219848 cri.go:89] found id: ""
	I1217 12:02:30.156611 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.156619 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:30.156627 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:30.156692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:30.182150 3219848 cri.go:89] found id: ""
	I1217 12:02:30.182174 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.182215 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:30.182222 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:30.182285 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:30.209339 3219848 cri.go:89] found id: ""
	I1217 12:02:30.209366 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.209376 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:30.209383 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:30.209443 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:30.235224 3219848 cri.go:89] found id: ""
	I1217 12:02:30.235250 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.235259 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:30.235268 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:30.235279 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:30.305932 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:30.297455    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.298291    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300025    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300319    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.301797    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:30.297455    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.298291    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300025    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300319    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.301797    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:30.305955 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:30.305968 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:30.335249 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:30.335282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:30.366831 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:30.366859 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:30.423045 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:30.423081 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
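By this point the pgrep/crictl cycle has repeated at a fixed cadence of roughly three seconds since 12:02:18 with identical results, which is consistent with a bounded poll loop waiting for the apiserver process to appear. A generic version of that wait, assuming an overall deadline (the actual interval and timeout inside minikube may differ):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the `pgrep -xnf kube-apiserver.*minikube.*`
// probe: exit status 0 means a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	tick := time.NewTicker(3 * time.Second) // cadence seen in the log
	defer tick.Stop()

	for {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for kube-apiserver")
			return
		case <-tick.C:
		}
	}
}
```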
	I1217 12:02:32.941855 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:32.953974 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:32.954052 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:32.986211 3219848 cri.go:89] found id: ""
	I1217 12:02:32.986233 3219848 logs.go:282] 0 containers: []
	W1217 12:02:32.986242 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:32.986249 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:32.986333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:33.015180 3219848 cri.go:89] found id: ""
	I1217 12:02:33.015209 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.015218 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:33.015227 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:33.015292 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:33.043066 3219848 cri.go:89] found id: ""
	I1217 12:02:33.043132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.043182 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:33.043216 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:33.043303 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:33.070150 3219848 cri.go:89] found id: ""
	I1217 12:02:33.070178 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.070187 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:33.070194 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:33.070254 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:33.099464 3219848 cri.go:89] found id: ""
	I1217 12:02:33.099502 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.099511 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:33.099519 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:33.099592 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:33.125134 3219848 cri.go:89] found id: ""
	I1217 12:02:33.125161 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.125170 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:33.125177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:33.125238 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:33.152585 3219848 cri.go:89] found id: ""
	I1217 12:02:33.152608 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.152617 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:33.152638 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:33.152703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:33.177715 3219848 cri.go:89] found id: ""
	I1217 12:02:33.177740 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.177749 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:33.177759 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:33.177770 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:33.234986 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:33.235024 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:33.255146 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:33.255186 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:33.339613 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:33.330741    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.331497    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333272    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333726    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.334950    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:33.330741    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.331497    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333272    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333726    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.334950    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:33.339647 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:33.339660 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:33.366064 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:33.366101 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:35.894549 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:35.904950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:35.905022 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:35.933462 3219848 cri.go:89] found id: ""
	I1217 12:02:35.933485 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.933493 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:35.933499 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:35.933558 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:35.958161 3219848 cri.go:89] found id: ""
	I1217 12:02:35.958228 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.958254 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:35.958275 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:35.958364 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:35.983016 3219848 cri.go:89] found id: ""
	I1217 12:02:35.983041 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.983051 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:35.983057 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:35.983126 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:36.015482 3219848 cri.go:89] found id: ""
	I1217 12:02:36.015527 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.015536 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:36.015543 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:36.015620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:36.046357 3219848 cri.go:89] found id: ""
	I1217 12:02:36.046393 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.046406 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:36.046416 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:36.046577 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:36.072553 3219848 cri.go:89] found id: ""
	I1217 12:02:36.072587 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.072596 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:36.072602 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:36.072662 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:36.099878 3219848 cri.go:89] found id: ""
	I1217 12:02:36.099911 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.099927 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:36.099934 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:36.100024 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:36.129180 3219848 cri.go:89] found id: ""
	I1217 12:02:36.129203 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.129212 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:36.129221 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:36.129234 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:36.186216 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:36.186254 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:36.203136 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:36.203166 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:36.273412 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:36.264653    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.265536    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267226    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267782    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.269421    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:36.273433 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:36.273446 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:36.300346 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:36.300378 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
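The block above is one complete pass of minikube's apiserver wait loop: probe for a kube-apiserver process, list CRI containers for each control-plane component, and, when every query comes back empty, gather kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal manual reproduction of the same probes from inside the node, as a sketch (the first two commands are taken verbatim from the log; the /healthz curl is an added assumption, not something the test runs):

    # does an apiserver process exist for this profile? (pattern quoted here to keep the shell from globbing)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # was a kube-apiserver container ever created under containerd?
    sudo crictl ps -a --quiet --name=kube-apiserver
    # is anything answering on the port the test keeps dialing?
    curl -sk https://localhost:8443/healthz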
	I1217 12:02:38.840293 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:38.851323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:38.851395 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:38.878324 3219848 cri.go:89] found id: ""
	I1217 12:02:38.878347 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.878356 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:38.878362 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:38.878418 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:38.904803 3219848 cri.go:89] found id: ""
	I1217 12:02:38.904824 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.904833 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:38.904839 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:38.904897 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:38.929044 3219848 cri.go:89] found id: ""
	I1217 12:02:38.929067 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.929075 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:38.929081 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:38.929148 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:38.953075 3219848 cri.go:89] found id: ""
	I1217 12:02:38.953101 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.953109 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:38.953119 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:38.953179 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:38.982538 3219848 cri.go:89] found id: ""
	I1217 12:02:38.982560 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.982569 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:38.982575 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:38.982634 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:39.009774 3219848 cri.go:89] found id: ""
	I1217 12:02:39.009797 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.009806 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:39.009813 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:39.009877 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:39.035772 3219848 cri.go:89] found id: ""
	I1217 12:02:39.035848 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.035872 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:39.035894 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:39.035966 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:39.070261 3219848 cri.go:89] found id: ""
	I1217 12:02:39.070282 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.070291 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:39.070299 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:39.070311 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:39.086150 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:39.086228 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:39.158855 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:39.150093    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.151044    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.152764    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.153406    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.155059    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:39.158917 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:39.158948 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:39.184120 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:39.184154 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:39.228401 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:39.228446 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:41.030449 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:02:41.099078 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:41.099186 3219848 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 12:02:41.102220 3219848 out.go:179] * Enabled addons: 
	I1217 12:02:41.105179 3219848 addons.go:530] duration metric: took 1m49.649331261s for enable addons: enabled=[]
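The storageclass apply above fails for the same root cause as every describe-nodes call: nothing is listening on localhost:8443, so kubectl cannot fetch the OpenAPI schema it needs for client-side validation. A quick way to tell "server unreachable" apart from "validation alone failing", sketched under the assumption of shell access to the node (--validate=false comes from the error text itself and only helps in the second case):

    # connection refused here means --validate=false cannot help: the server is down
    curl -sk https://localhost:8443/openapi/v2 >/dev/null && echo reachable || echo unreachable
    # only worth trying once the apiserver is up but validation is still failing
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml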
	I1217 12:02:41.789011 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:41.800666 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:41.800741 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:41.831181 3219848 cri.go:89] found id: ""
	I1217 12:02:41.831214 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.831222 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:41.831229 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:41.831292 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:41.855868 3219848 cri.go:89] found id: ""
	I1217 12:02:41.855893 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.855901 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:41.855909 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:41.855970 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:41.880077 3219848 cri.go:89] found id: ""
	I1217 12:02:41.880102 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.880110 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:41.880117 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:41.880174 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:41.904526 3219848 cri.go:89] found id: ""
	I1217 12:02:41.904553 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.904562 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:41.904568 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:41.904630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:41.930234 3219848 cri.go:89] found id: ""
	I1217 12:02:41.930257 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.930266 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:41.930272 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:41.930329 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:41.958809 3219848 cri.go:89] found id: ""
	I1217 12:02:41.958835 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.958844 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:41.958851 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:41.958909 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:41.983616 3219848 cri.go:89] found id: ""
	I1217 12:02:41.983642 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.983652 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:41.983658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:41.983723 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:42.011680 3219848 cri.go:89] found id: ""
	I1217 12:02:42.011705 3219848 logs.go:282] 0 containers: []
	W1217 12:02:42.011714 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:42.011725 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:42.011736 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:42.073172 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:42.073215 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:42.092098 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:42.092139 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:42.170615 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:42.158978    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.160329    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.161071    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.163397    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.164052    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:42.170644 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:42.170669 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:42.200096 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:42.200137 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:44.738108 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:44.751949 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:44.752049 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:44.785830 3219848 cri.go:89] found id: ""
	I1217 12:02:44.785869 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.785902 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:44.785911 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:44.785988 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:44.815102 3219848 cri.go:89] found id: ""
	I1217 12:02:44.815138 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.815148 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:44.815154 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:44.815256 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:44.843623 3219848 cri.go:89] found id: ""
	I1217 12:02:44.843658 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.843667 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:44.843674 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:44.843768 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:44.868589 3219848 cri.go:89] found id: ""
	I1217 12:02:44.868612 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.868620 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:44.868626 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:44.868710 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:44.893731 3219848 cri.go:89] found id: ""
	I1217 12:02:44.893757 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.893767 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:44.893774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:44.893877 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:44.920703 3219848 cri.go:89] found id: ""
	I1217 12:02:44.920732 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.920741 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:44.920748 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:44.920807 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:44.945270 3219848 cri.go:89] found id: ""
	I1217 12:02:44.945307 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.945317 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:44.945323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:44.945390 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:44.974571 3219848 cri.go:89] found id: ""
	I1217 12:02:44.974669 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.974693 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:44.974723 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:44.974767 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:45.011160 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:45.011262 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:45.135210 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:45.135297 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:45.172030 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:45.172125 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:45.299181 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:45.286225    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.288700    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.289610    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.291554    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.292270    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:45.299256 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:45.299270 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:47.834408 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:47.845640 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:47.845713 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:47.875767 3219848 cri.go:89] found id: ""
	I1217 12:02:47.875793 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.875803 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:47.875809 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:47.875894 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:47.900760 3219848 cri.go:89] found id: ""
	I1217 12:02:47.900798 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.900808 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:47.900815 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:47.900916 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:47.925606 3219848 cri.go:89] found id: ""
	I1217 12:02:47.925640 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.925650 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:47.925656 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:47.925730 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:47.953896 3219848 cri.go:89] found id: ""
	I1217 12:02:47.953919 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.953928 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:47.953935 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:47.954003 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:47.979667 3219848 cri.go:89] found id: ""
	I1217 12:02:47.979736 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.979759 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:47.979780 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:47.979871 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:48.009398 3219848 cri.go:89] found id: ""
	I1217 12:02:48.009477 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.009502 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:48.009528 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:48.009630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:48.039277 3219848 cri.go:89] found id: ""
	I1217 12:02:48.039349 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.039373 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:48.039400 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:48.039498 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:48.065115 3219848 cri.go:89] found id: ""
	I1217 12:02:48.065140 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.065151 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:48.065162 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:48.065175 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:48.081650 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:48.081680 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:48.149022 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:48.140864    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.141345    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.142918    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.143402    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.144920    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:48.149046 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:48.149060 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:48.174962 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:48.174999 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:48.204617 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:48.204645 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
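Each gather cycle shells out to the same five collectors, only in rotating order: journalctl for kubelet and containerd, a filtered dmesg, kubectl describe nodes, and a crictl/docker container listing. When the apiserver never starts, the kubelet and containerd units are the two worth following live; a sketch using only commands and units already named in the log (-f streams new entries instead of the fixed -n 400 tail):

    sudo journalctl -fu kubelet
    sudo journalctl -fu containerd
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 50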
	I1217 12:02:50.772582 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:50.784158 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:50.784228 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:50.814532 3219848 cri.go:89] found id: ""
	I1217 12:02:50.814555 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.814563 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:50.814569 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:50.814628 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:50.848966 3219848 cri.go:89] found id: ""
	I1217 12:02:50.848989 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.848997 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:50.849004 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:50.849066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:50.873257 3219848 cri.go:89] found id: ""
	I1217 12:02:50.873284 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.873293 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:50.873300 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:50.873364 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:50.897538 3219848 cri.go:89] found id: ""
	I1217 12:02:50.897564 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.897573 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:50.897579 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:50.897638 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:50.922912 3219848 cri.go:89] found id: ""
	I1217 12:02:50.922937 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.922946 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:50.922953 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:50.923013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:50.948094 3219848 cri.go:89] found id: ""
	I1217 12:02:50.948120 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.948129 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:50.948136 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:50.948196 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:50.974087 3219848 cri.go:89] found id: ""
	I1217 12:02:50.974114 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.974124 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:50.974131 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:50.974190 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:51.006127 3219848 cri.go:89] found id: ""
	I1217 12:02:51.006159 3219848 logs.go:282] 0 containers: []
	W1217 12:02:51.006169 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:51.006256 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:51.006275 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:51.032290 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:51.032323 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:51.063443 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:51.063469 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:51.119487 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:51.119523 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:51.138001 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:51.138031 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:51.208764 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:51.200371    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.201009    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202548    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202968    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.204568    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:53.709691 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:53.720597 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:53.720678 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:53.745776 3219848 cri.go:89] found id: ""
	I1217 12:02:53.745802 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.745811 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:53.745819 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:53.745878 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:53.775989 3219848 cri.go:89] found id: ""
	I1217 12:02:53.776013 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.776021 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:53.776027 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:53.776098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:53.810226 3219848 cri.go:89] found id: ""
	I1217 12:02:53.810253 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.810262 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:53.810269 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:53.810333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:53.839758 3219848 cri.go:89] found id: ""
	I1217 12:02:53.839778 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.839787 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:53.839793 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:53.839857 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:53.864680 3219848 cri.go:89] found id: ""
	I1217 12:02:53.864745 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.864768 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:53.864788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:53.864872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:53.888540 3219848 cri.go:89] found id: ""
	I1217 12:02:53.888561 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.888569 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:53.888576 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:53.888640 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:53.912908 3219848 cri.go:89] found id: ""
	I1217 12:02:53.912973 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.912998 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:53.913015 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:53.913087 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:53.942233 3219848 cri.go:89] found id: ""
	I1217 12:02:53.942254 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.942263 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:53.942285 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:53.942300 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:53.998450 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:53.998485 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:54.017836 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:54.017867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:54.086072 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:54.077439    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.078327    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.079921    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.080399    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.082101    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:54.086097 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:54.086110 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:54.112391 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:54.112586 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
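Every crictl query in these cycles returns an empty ID (found id: ""), meaning containerd never created the control-plane containers at all, as opposed to creating them and watching them crash; that points at kubelet's static-pod path rather than at the containers themselves. A sketch of the next checks; the manifest directory is the conventional kubeadm/minikube location and is an assumption here, not something this log shows:

    # are the control-plane static pod manifests present for kubelet to pick up?
    ls -l /etc/kubernetes/manifests/
    # did containerd create any pod sandboxes at all?
    sudo crictl pods
    sudo crictl ps -a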
	I1217 12:02:56.648110 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:56.658791 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:56.658863 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:56.685484 3219848 cri.go:89] found id: ""
	I1217 12:02:56.685508 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.685516 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:56.685526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:56.685587 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:56.710064 3219848 cri.go:89] found id: ""
	I1217 12:02:56.710126 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.710141 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:56.710148 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:56.710219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:56.735357 3219848 cri.go:89] found id: ""
	I1217 12:02:56.735383 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.735393 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:56.735404 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:56.735465 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:56.767684 3219848 cri.go:89] found id: ""
	I1217 12:02:56.767710 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.767724 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:56.767731 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:56.767792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:56.809924 3219848 cri.go:89] found id: ""
	I1217 12:02:56.809951 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.809960 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:56.809968 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:56.810026 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:56.839853 3219848 cri.go:89] found id: ""
	I1217 12:02:56.839879 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.839889 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:56.839895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:56.839956 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:56.866637 3219848 cri.go:89] found id: ""
	I1217 12:02:56.866663 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.866672 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:56.866679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:56.866746 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:56.891828 3219848 cri.go:89] found id: ""
	I1217 12:02:56.891853 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.891862 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:56.891872 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:56.891885 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:56.948612 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:56.948652 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:56.964832 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:56.964864 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:57.035706 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:57.026894    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.027527    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.029280    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.030006    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.031607    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:57.035725 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:57.035783 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:57.061297 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:57.061332 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:59.592887 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:59.603568 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:59.603647 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:59.628351 3219848 cri.go:89] found id: ""
	I1217 12:02:59.628378 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.628387 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:59.628395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:59.628503 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:59.654358 3219848 cri.go:89] found id: ""
	I1217 12:02:59.654380 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.654388 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:59.654394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:59.654456 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:59.679684 3219848 cri.go:89] found id: ""
	I1217 12:02:59.679703 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.679717 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:59.679723 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:59.679786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:59.706460 3219848 cri.go:89] found id: ""
	I1217 12:02:59.706491 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.706501 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:59.706507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:59.706570 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:59.736016 3219848 cri.go:89] found id: ""
	I1217 12:02:59.736041 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.736050 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:59.736057 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:59.736116 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:59.778297 3219848 cri.go:89] found id: ""
	I1217 12:02:59.778323 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.778332 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:59.778339 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:59.778404 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:59.809983 3219848 cri.go:89] found id: ""
	I1217 12:02:59.810009 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.810018 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:59.810025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:59.810082 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:59.843076 3219848 cri.go:89] found id: ""
	I1217 12:02:59.843102 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.843110 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:59.843119 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:59.843131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:59.902975 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:59.903012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:59.918923 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:59.918958 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:59.987681 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:59.979645    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.980298    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.981764    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.982249    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.983739    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
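Each `describe nodes` attempt fails the same way: kubectl resolves localhost to `::1` and the dial is refused, meaning nothing is listening on 8443 at all. A hypothetical follow-up probe from the host (`<profile>` is a placeholder for this run's profile name, and `ss` being available inside the node is an assumption):

    # Confirm nothing is bound to the apiserver port inside the node.
    minikube ssh -p <profile> -- \
      "sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'"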
	I1217 12:02:59.987704 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:59.987716 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:00.126179 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:00.128746 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:02.747342 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:02.759443 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:02.759536 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:02.813879 3219848 cri.go:89] found id: ""
	I1217 12:03:02.813907 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.813917 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:02.813924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:02.813996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:02.856869 3219848 cri.go:89] found id: ""
	I1217 12:03:02.856899 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.856908 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:02.856915 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:02.856973 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:02.883984 3219848 cri.go:89] found id: ""
	I1217 12:03:02.884015 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.884024 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:02.884031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:02.884094 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:02.911584 3219848 cri.go:89] found id: ""
	I1217 12:03:02.911605 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.911613 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:02.911619 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:02.911677 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:02.941815 3219848 cri.go:89] found id: ""
	I1217 12:03:02.941837 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.941847 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:02.941853 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:02.941920 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:02.971949 3219848 cri.go:89] found id: ""
	I1217 12:03:02.971972 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.971980 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:02.971986 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:02.972045 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:02.997848 3219848 cri.go:89] found id: ""
	I1217 12:03:02.997875 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.997884 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:02.997891 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:02.997952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:03.025293 3219848 cri.go:89] found id: ""
	I1217 12:03:03.025321 3219848 logs.go:282] 0 containers: []
	W1217 12:03:03.025330 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:03.025339 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:03.025353 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:03.095479 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:03.086357    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.087966    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.088719    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.089902    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.090320    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:03.095503 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:03.095517 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:03.121627 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:03.121668 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:03.152132 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:03.152162 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:03.208671 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:03.208717 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:05.726193 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:05.737765 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:05.737842 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:05.803315 3219848 cri.go:89] found id: ""
	I1217 12:03:05.803338 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.803355 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:05.803364 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:05.803424 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:05.852889 3219848 cri.go:89] found id: ""
	I1217 12:03:05.852952 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.852967 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:05.852975 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:05.853035 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:05.885239 3219848 cri.go:89] found id: ""
	I1217 12:03:05.885263 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.885274 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:05.885281 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:05.885346 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:05.909571 3219848 cri.go:89] found id: ""
	I1217 12:03:05.909601 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.909610 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:05.909617 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:05.909683 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:05.944648 3219848 cri.go:89] found id: ""
	I1217 12:03:05.944714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.944729 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:05.944742 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:05.944801 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:05.969671 3219848 cri.go:89] found id: ""
	I1217 12:03:05.969707 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.969716 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:05.969738 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:05.969819 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:05.994549 3219848 cri.go:89] found id: ""
	I1217 12:03:05.994575 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.994584 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:05.994590 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:05.994648 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:06.025175 3219848 cri.go:89] found id: ""
	I1217 12:03:06.025201 3219848 logs.go:282] 0 containers: []
	W1217 12:03:06.025212 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:06.025223 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:06.025255 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:06.094463 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:06.085807    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.086594    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.088396    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.089018    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.090252    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:06.094488 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:06.094503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:06.120857 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:06.120892 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:06.148825 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:06.148854 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:06.207501 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:06.207537 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
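With every container probe empty, the gatherer falls back to the same evidence bundle each round; the exact commands appear in the log and can be replayed verbatim inside the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

Of these, only the kubelet journal can explain why no control-plane container was ever started; the `describe nodes` call is doomed for as long as nothing serves 8443.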
	I1217 12:03:08.724013 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:08.734763 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:08.734854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:08.797461 3219848 cri.go:89] found id: ""
	I1217 12:03:08.797536 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.797561 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:08.797583 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:08.797692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:08.849950 3219848 cri.go:89] found id: ""
	I1217 12:03:08.850015 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.850031 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:08.850039 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:08.850099 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:08.876353 3219848 cri.go:89] found id: ""
	I1217 12:03:08.876378 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.876387 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:08.876394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:08.876474 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:08.902743 3219848 cri.go:89] found id: ""
	I1217 12:03:08.902767 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.902776 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:08.902783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:08.902847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:08.928380 3219848 cri.go:89] found id: ""
	I1217 12:03:08.928405 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.928439 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:08.928447 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:08.928508 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:08.953372 3219848 cri.go:89] found id: ""
	I1217 12:03:08.953397 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.953406 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:08.953413 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:08.953481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:08.977913 3219848 cri.go:89] found id: ""
	I1217 12:03:08.977935 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.977945 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:08.977951 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:08.978015 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:09.014088 3219848 cri.go:89] found id: ""
	I1217 12:03:09.014114 3219848 logs.go:282] 0 containers: []
	W1217 12:03:09.014123 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:09.014133 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:09.014144 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:09.069559 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:09.069599 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:09.085849 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:09.085877 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:09.153859 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:09.145727    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.146529    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148157    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148779    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.150028    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:09.153879 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:09.153892 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:09.179067 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:09.179099 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:11.708448 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:11.719221 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:11.719291 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:11.744009 3219848 cri.go:89] found id: ""
	I1217 12:03:11.744033 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.744042 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:11.744048 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:11.744104 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:11.795640 3219848 cri.go:89] found id: ""
	I1217 12:03:11.795663 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.795671 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:11.795678 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:11.795739 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:11.851553 3219848 cri.go:89] found id: ""
	I1217 12:03:11.851573 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.851581 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:11.851587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:11.851642 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:11.879197 3219848 cri.go:89] found id: ""
	I1217 12:03:11.879272 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.879294 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:11.879316 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:11.879432 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:11.904743 3219848 cri.go:89] found id: ""
	I1217 12:03:11.904816 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.904839 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:11.904864 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:11.904974 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:11.930378 3219848 cri.go:89] found id: ""
	I1217 12:03:11.930452 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.930482 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:11.930491 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:11.930562 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:11.955446 3219848 cri.go:89] found id: ""
	I1217 12:03:11.955475 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.955485 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:11.955492 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:11.955553 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:11.980056 3219848 cri.go:89] found id: ""
	I1217 12:03:11.980082 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.980092 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:11.980102 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:11.980113 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:12.039392 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:12.039430 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:12.055724 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:12.055752 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:12.120835 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:12.111964    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.112751    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114462    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114770    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.116985    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:12.120858 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:12.120871 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:12.145568 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:12.145601 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:14.685252 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:14.695909 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:14.695982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:14.722094 3219848 cri.go:89] found id: ""
	I1217 12:03:14.722116 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.722124 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:14.722131 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:14.722191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:14.747765 3219848 cri.go:89] found id: ""
	I1217 12:03:14.747790 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.747799 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:14.747805 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:14.747863 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:14.832061 3219848 cri.go:89] found id: ""
	I1217 12:03:14.832086 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.832096 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:14.832103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:14.832175 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:14.861589 3219848 cri.go:89] found id: ""
	I1217 12:03:14.861612 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.861621 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:14.861628 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:14.861687 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:14.887122 3219848 cri.go:89] found id: ""
	I1217 12:03:14.887144 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.887153 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:14.887160 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:14.887219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:14.913961 3219848 cri.go:89] found id: ""
	I1217 12:03:14.913988 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.913996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:14.914003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:14.914063 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:14.940509 3219848 cri.go:89] found id: ""
	I1217 12:03:14.940539 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.940584 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:14.940599 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:14.940684 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:14.968190 3219848 cri.go:89] found id: ""
	I1217 12:03:14.968260 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.968286 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:14.968314 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:14.968341 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:15.025687 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:15.025728 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:15.048063 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:15.048161 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:15.120549 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:15.111260    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.111932    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.113791    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.114487    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.116204    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:15.120575 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:15.120590 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:15.147374 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:15.147419 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:17.678613 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:17.689902 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:17.689996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:17.715580 3219848 cri.go:89] found id: ""
	I1217 12:03:17.715617 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.715626 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:17.715634 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:17.715706 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:17.746656 3219848 cri.go:89] found id: ""
	I1217 12:03:17.746680 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.746689 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:17.746696 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:17.746757 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:17.777911 3219848 cri.go:89] found id: ""
	I1217 12:03:17.777981 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.778005 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:17.778031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:17.778142 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:17.841621 3219848 cri.go:89] found id: ""
	I1217 12:03:17.841682 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.841714 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:17.841734 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:17.841839 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:17.874462 3219848 cri.go:89] found id: ""
	I1217 12:03:17.874536 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.874559 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:17.874573 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:17.874655 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:17.899519 3219848 cri.go:89] found id: ""
	I1217 12:03:17.899563 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.899573 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:17.899580 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:17.899654 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:17.925535 3219848 cri.go:89] found id: ""
	I1217 12:03:17.925559 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.925568 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:17.925574 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:17.925642 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:17.950672 3219848 cri.go:89] found id: ""
	I1217 12:03:17.950737 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.950761 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:17.950787 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:17.950826 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:18.006915 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:18.006964 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:18.024598 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:18.024632 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:18.093800 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:18.085487    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.086439    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.087176    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.088142    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.089680    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:18.093830 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:18.093843 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:18.120115 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:18.120150 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:20.651699 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:20.662809 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:20.662885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:20.692750 3219848 cri.go:89] found id: ""
	I1217 12:03:20.692772 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.692781 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:20.692787 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:20.692854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:20.723234 3219848 cri.go:89] found id: ""
	I1217 12:03:20.723259 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.723267 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:20.723273 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:20.723334 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:20.749812 3219848 cri.go:89] found id: ""
	I1217 12:03:20.749833 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.749841 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:20.749847 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:20.749903 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:20.799186 3219848 cri.go:89] found id: ""
	I1217 12:03:20.799208 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.799216 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:20.799222 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:20.799280 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:20.850498 3219848 cri.go:89] found id: ""
	I1217 12:03:20.850573 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.850596 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:20.850617 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:20.850735 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:20.881588 3219848 cri.go:89] found id: ""
	I1217 12:03:20.881660 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.881682 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:20.881702 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:20.881790 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:20.911209 3219848 cri.go:89] found id: ""
	I1217 12:03:20.911275 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.911301 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:20.911316 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:20.911391 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:20.938447 3219848 cri.go:89] found id: ""
	I1217 12:03:20.938473 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.938483 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:20.938492 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:20.938503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:20.995421 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:20.995463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:21.013450 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:21.013483 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:21.084404 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:21.075746    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.076533    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078205    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078900    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.080479    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:21.084449 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:21.084463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:21.111296 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:21.111335 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
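
Each cycle above boils down to the same probe: shell out to crictl per control-plane component and treat empty output as "no container found", which is exactly what every `found id: ""` / `0 containers` pair records. A minimal Go sketch of that check, assuming crictl is installed and runnable via sudo (an illustration, not minikube's actual cri.go):

// probe.go - sketch of the per-component container probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) ([]string, error) {
	// Mirrors: sudo crictl ps -a --quiet --name=<component>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line; empty means none
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := findContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("found ids for %q: %v\n", c, ids)
	}
}

In this run every component comes back empty, which is why the loop never advances past log gathering.
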
	I1217 12:03:23.647949 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:23.658668 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:23.658737 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:23.685275 3219848 cri.go:89] found id: ""
	I1217 12:03:23.685298 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.685307 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:23.685314 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:23.685375 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:23.711416 3219848 cri.go:89] found id: ""
	I1217 12:03:23.711466 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.711478 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:23.711485 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:23.711549 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:23.738391 3219848 cri.go:89] found id: ""
	I1217 12:03:23.738418 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.738427 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:23.738433 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:23.738492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:23.801227 3219848 cri.go:89] found id: ""
	I1217 12:03:23.801253 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.801262 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:23.801268 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:23.801327 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:23.837564 3219848 cri.go:89] found id: ""
	I1217 12:03:23.837585 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.837593 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:23.837600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:23.837660 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:23.864057 3219848 cri.go:89] found id: ""
	I1217 12:03:23.864078 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.864086 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:23.864093 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:23.864159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:23.888263 3219848 cri.go:89] found id: ""
	I1217 12:03:23.888289 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.888298 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:23.888305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:23.888363 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:23.917533 3219848 cri.go:89] found id: ""
	I1217 12:03:23.917555 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.917564 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:23.917573 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:23.917584 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:23.946496 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:23.946525 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:24.003650 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:24.003697 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:24.022449 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:24.022482 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:24.093823 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:24.084998    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.085736    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.087440    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.088190    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.089867    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:24.093845 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:24.093858 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
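
Every kubectl attempt fails with `dial tcp [::1]:8443: connect: connection refused`: nothing is listening on the apiserver port yet, so the failure happens at the TCP layer before any API request is made. A minimal readiness sketch under that assumption (8443 is this profile's apiserver port; the helper name is hypothetical):

// dialcheck.go - sketch of the TCP-level signal behind "connection refused".
package main

import (
	"fmt"
	"net"
	"time"
)

func apiserverListening(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false // e.g. "connect: connection refused" while the apiserver is down
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("apiserver up:", apiserverListening("localhost:8443"))
}

Until this returns true, `kubectl describe nodes` can only ever produce the stderr seen above.
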
	I1217 12:03:26.622844 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:26.634100 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:26.634173 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:26.662315 3219848 cri.go:89] found id: ""
	I1217 12:03:26.662341 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.662350 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:26.662357 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:26.662417 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:26.689598 3219848 cri.go:89] found id: ""
	I1217 12:03:26.689623 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.689633 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:26.689640 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:26.689704 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:26.716815 3219848 cri.go:89] found id: ""
	I1217 12:03:26.716841 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.716850 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:26.716858 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:26.716926 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:26.743338 3219848 cri.go:89] found id: ""
	I1217 12:03:26.743364 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.743375 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:26.743382 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:26.743447 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:26.799290 3219848 cri.go:89] found id: ""
	I1217 12:03:26.799326 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.799335 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:26.799342 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:26.799412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:26.854473 3219848 cri.go:89] found id: ""
	I1217 12:03:26.854539 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.854555 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:26.854563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:26.854625 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:26.880552 3219848 cri.go:89] found id: ""
	I1217 12:03:26.880581 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.880591 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:26.880598 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:26.880659 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:26.906009 3219848 cri.go:89] found id: ""
	I1217 12:03:26.906042 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.906052 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:26.906061 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:26.906072 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:26.971795 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:26.963736    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.964328    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.965821    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.966197    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.967699    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:26.971818 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:26.971831 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:26.996929 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:26.996968 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:27.031442 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:27.031479 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:27.088296 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:27.088330 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
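
Each "Gathering logs for X ..." pair is one shell pipeline run inside the guest. The commands below are copied verbatim from the log; the plain exec runner is a local stand-in for minikube's ssh_runner, so this is a sketch of the step, not the real transport:

// gather.go - sketch of the diagnostic log-gathering step.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  failed: %v\n", err) // diagnostics are best-effort
			continue
		}
		fmt.Printf("  %d bytes collected\n", len(out))
	}
}
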
	I1217 12:03:29.604978 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:29.615685 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:29.615754 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:29.642346 3219848 cri.go:89] found id: ""
	I1217 12:03:29.642375 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.642384 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:29.642391 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:29.642449 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:29.669188 3219848 cri.go:89] found id: ""
	I1217 12:03:29.669214 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.669223 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:29.669230 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:29.669293 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:29.695623 3219848 cri.go:89] found id: ""
	I1217 12:03:29.695648 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.695657 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:29.695663 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:29.695729 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:29.721447 3219848 cri.go:89] found id: ""
	I1217 12:03:29.721472 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.721482 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:29.721489 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:29.721551 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:29.746217 3219848 cri.go:89] found id: ""
	I1217 12:03:29.746244 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.746253 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:29.746261 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:29.746318 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:29.797088 3219848 cri.go:89] found id: ""
	I1217 12:03:29.797122 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.797131 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:29.797137 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:29.797210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:29.845942 3219848 cri.go:89] found id: ""
	I1217 12:03:29.845962 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.845971 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:29.845977 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:29.846041 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:29.881686 3219848 cri.go:89] found id: ""
	I1217 12:03:29.881714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.881723 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:29.881733 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:29.881745 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:29.938916 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:29.938949 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:29.954625 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:29.954702 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:30.048700 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:30.033826    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.034802    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036344    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036964    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.039023    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:30.048776 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:30.048805 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:30.081544 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:30.081588 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
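
The pgrep timestamps (12:03:20, :23, :26, :29, :32, ...) show a roughly three-second poll for the kube-apiserver process. A sketch of such a wait loop; the interval and timeout here are illustrative, not minikube's actual tuning:

// waitloop.go - sketch of the ~3 s pgrep poll visible in the timestamps.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep exits non-zero when no process matches the pattern.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative bound
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
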
	I1217 12:03:32.617502 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:32.628255 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:32.628328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:32.653287 3219848 cri.go:89] found id: ""
	I1217 12:03:32.653314 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.653323 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:32.653331 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:32.653393 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:32.678914 3219848 cri.go:89] found id: ""
	I1217 12:03:32.678938 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.678946 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:32.678952 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:32.679013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:32.705809 3219848 cri.go:89] found id: ""
	I1217 12:03:32.705835 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.705845 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:32.705852 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:32.705915 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:32.736249 3219848 cri.go:89] found id: ""
	I1217 12:03:32.736278 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.736294 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:32.736301 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:32.736382 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:32.777637 3219848 cri.go:89] found id: ""
	I1217 12:03:32.777666 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.777676 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:32.777684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:32.777749 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:32.848686 3219848 cri.go:89] found id: ""
	I1217 12:03:32.848726 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.848735 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:32.848742 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:32.848811 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:32.877608 3219848 cri.go:89] found id: ""
	I1217 12:03:32.877633 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.877643 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:32.877650 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:32.877715 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:32.912387 3219848 cri.go:89] found id: ""
	I1217 12:03:32.912443 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.912453 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:32.912463 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:32.912478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:32.973780 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:32.965664    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.966474    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.968080    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.968441    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.969916    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:32.973802 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:32.973816 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:32.999779 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:32.999818 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:33.035424 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:33.035456 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:33.095096 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:33.095136 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
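
The describe-nodes probe calls the version-pinned kubectl inside the guest against the guest kubeconfig, so while the apiserver is down it can only report exit status 1 plus captured stderr, as every W line above shows. A sketch with the paths copied from the log; the error handling is illustrative:

// describe.go - sketch of the "describe nodes" log-gathering probe.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := `sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		// Matches the W "failed describe nodes" lines: non-zero exit plus stderr.
		fmt.Printf("failed describe nodes: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
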
	I1217 12:03:35.611791 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:35.625472 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:35.625546 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:35.656243 3219848 cri.go:89] found id: ""
	I1217 12:03:35.656265 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.656273 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:35.656280 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:35.656339 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:35.681938 3219848 cri.go:89] found id: ""
	I1217 12:03:35.681964 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.681972 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:35.681978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:35.682038 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:35.711864 3219848 cri.go:89] found id: ""
	I1217 12:03:35.711887 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.711896 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:35.711902 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:35.711961 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:35.736900 3219848 cri.go:89] found id: ""
	I1217 12:03:35.736924 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.736932 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:35.736942 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:35.737002 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:35.796476 3219848 cri.go:89] found id: ""
	I1217 12:03:35.796553 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.796576 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:35.796598 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:35.796711 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:35.851385 3219848 cri.go:89] found id: ""
	I1217 12:03:35.851463 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.851487 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:35.851530 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:35.851627 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:35.879315 3219848 cri.go:89] found id: ""
	I1217 12:03:35.879388 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.879423 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:35.879447 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:35.879560 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:35.904369 3219848 cri.go:89] found id: ""
	I1217 12:03:35.904461 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.904485 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:35.904509 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:35.904539 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:35.962316 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:35.962358 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:35.978473 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:35.978503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:36.048228 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:36.039946    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.040655    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.042240    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.042853    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.043967    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:36.048254 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:36.048267 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:36.075099 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:36.075134 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
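
The "container status" command is a shell fallback chain: the backtick substitution keeps the literal name crictl even when `which` finds nothing, so a failed crictl run (binary missing or erroring) falls through to `sudo docker ps -a`. The same preference order rendered in Go, as an illustration:

// status.go - sketch of the crictl-then-docker fallback for container status.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	// Fall back to docker if crictl is absent or its listing failed.
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both runtimes unavailable:", err)
		return
	}
	fmt.Printf("%s", out)
}
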
	I1217 12:03:38.607418 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:38.618789 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:38.618869 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:38.646270 3219848 cri.go:89] found id: ""
	I1217 12:03:38.646297 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.646307 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:38.646315 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:38.646379 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:38.671906 3219848 cri.go:89] found id: ""
	I1217 12:03:38.671931 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.671940 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:38.671947 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:38.672012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:38.696480 3219848 cri.go:89] found id: ""
	I1217 12:03:38.696504 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.696513 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:38.696520 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:38.696581 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:38.727000 3219848 cri.go:89] found id: ""
	I1217 12:03:38.727026 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.727036 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:38.727042 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:38.727114 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:38.782353 3219848 cri.go:89] found id: ""
	I1217 12:03:38.782381 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.782391 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:38.782398 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:38.782459 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:38.847087 3219848 cri.go:89] found id: ""
	I1217 12:03:38.847110 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.847118 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:38.847125 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:38.847183 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:38.874682 3219848 cri.go:89] found id: ""
	I1217 12:03:38.874704 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.874712 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:38.874718 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:38.874780 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:38.902269 3219848 cri.go:89] found id: ""
	I1217 12:03:38.902297 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.902306 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:38.902316 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:38.902331 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:38.967646 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:38.958671    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.959248    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961005    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961508    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.963014    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:38.967671 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:38.967685 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:38.993086 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:38.993121 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:39.024046 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:39.024079 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:39.080928 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:39.080962 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
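
With this many near-identical cycles, triage is easier after folding the report down to counts. A small, hypothetical helper (not part of the test suite) that tallies the "No container was found matching" warnings from stdin:

// tally.go - fold repeated probe misses into per-component counts.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`No container was found matching "([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for name, n := range counts {
		fmt.Printf("%-24s %d misses\n", name, n)
	}
}

Run as: go run tally.go < report.txt
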
	I1217 12:03:41.597202 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:41.608508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:41.608582 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:41.634319 3219848 cri.go:89] found id: ""
	I1217 12:03:41.634344 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.634359 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:41.634366 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:41.634427 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:41.660053 3219848 cri.go:89] found id: ""
	I1217 12:03:41.660076 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.660085 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:41.660092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:41.660159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:41.686022 3219848 cri.go:89] found id: ""
	I1217 12:03:41.686047 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.686056 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:41.686062 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:41.686119 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:41.711689 3219848 cri.go:89] found id: ""
	I1217 12:03:41.711714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.711723 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:41.711729 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:41.711798 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:41.738135 3219848 cri.go:89] found id: ""
	I1217 12:03:41.738161 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.738170 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:41.738177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:41.738235 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:41.794953 3219848 cri.go:89] found id: ""
	I1217 12:03:41.794975 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.794984 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:41.794991 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:41.795051 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:41.832712 3219848 cri.go:89] found id: ""
	I1217 12:03:41.832747 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.832755 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:41.832762 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:41.832872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:41.862947 3219848 cri.go:89] found id: ""
	I1217 12:03:41.862967 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.862976 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:41.862985 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:41.862996 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:41.888484 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:41.888519 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:41.919432 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:41.919461 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:41.979083 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:41.979117 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:41.995225 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:41.995256 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:42.068500 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:42.058178    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.059172    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061060    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061946    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.063848    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
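
Taken together, each ~3 s cycle above is "check for the apiserver; if absent, gather diagnostics and retry". A compact sketch of that outer loop under the same assumptions as the earlier snippets, with bounded iterations for illustration:

// outer.go - the combined wait-with-diagnostics loop, sketched end to end.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func run(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	for i := 0; i < 5; i++ { // bounded here; the real wait runs to a timeout
		if run("sudo pgrep -xnf 'kube-apiserver.*minikube.*'") == nil {
			fmt.Println("apiserver is up")
			return
		}
		for _, c := range []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo journalctl -u containerd -n 400",
			"sudo crictl ps -a",
		} {
			_ = run(c) // diagnostics are best-effort; failures are logged, not fatal
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("giving up; see gathered logs")
}
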
	I1217 12:03:44.569152 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:44.579717 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:44.579791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:44.604579 3219848 cri.go:89] found id: ""
	I1217 12:03:44.604605 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.604614 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:44.604621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:44.604680 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:44.628954 3219848 cri.go:89] found id: ""
	I1217 12:03:44.628987 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.628997 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:44.629004 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:44.629066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:44.657345 3219848 cri.go:89] found id: ""
	I1217 12:03:44.657372 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.657381 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:44.657388 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:44.657445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:44.682960 3219848 cri.go:89] found id: ""
	I1217 12:03:44.682983 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.683000 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:44.683007 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:44.683066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:44.712406 3219848 cri.go:89] found id: ""
	I1217 12:03:44.712451 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.712461 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:44.712468 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:44.712526 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:44.737929 3219848 cri.go:89] found id: ""
	I1217 12:03:44.737952 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.737961 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:44.737967 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:44.738027 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:44.778893 3219848 cri.go:89] found id: ""
	I1217 12:03:44.778921 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.778930 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:44.778938 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:44.779003 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:44.818695 3219848 cri.go:89] found id: ""
	I1217 12:03:44.818724 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.818733 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
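The scan above is the log collector walking the expected control-plane components one by one. "crictl ps -a --quiet --name=NAME" prints only matching container IDs, so found id: "" means containerd holds no container for that component, running or exited: the control plane was never created, not merely crashed. A rough standalone equivalent of the scan (component list copied from the log; a sketch, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Component names copied from the scan above.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            // --quiet prints only container IDs; empty output means no
            // container (running or exited) matches this name.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            if id := strings.TrimSpace(string(out)); id == "" {
                fmt.Printf("no container was found matching %q\n", name)
            } else {
                fmt.Printf("%s: %s\n", name, id)
            }
        }
    }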
	I1217 12:03:44.818742 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:44.818754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:44.888711 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:44.888748 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:44.905193 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:44.905224 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:44.969126 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:44.960653    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.961469    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963160    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963503    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.964997    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:44.960653    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.961469    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963160    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963503    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.964997    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
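The "describe nodes" gather fails for the same underlying reason: it shells out to the version-matched kubectl binary bundled on the node, using the node's own kubeconfig, and that kubectl hits the same refused port. The invocation can be replayed by hand inside the node; the sketch below only wraps the exact command from the log (both paths are specific to this node image):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exact command from the log; binary and kubeconfig paths are
        // specific to this minikube node image.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            // With nothing listening on 8443 this exits 1, as above.
            fmt.Println("describe nodes failed:", err)
        }
    }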
	I1217 12:03:44.969149 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:44.969162 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:44.995233 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:44.995272 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
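That closes one full diagnostic cycle: the component scan, then five log sources (kubelet journal, dmesg, describe nodes, containerd journal, container status). The container-status command is written defensively: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a still produces a listing when crictl is missing from PATH or fails, by falling back to docker. The same fallback, sketched in Go (an illustration, not the collector's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirror of the fallback in the container-status command above:
        // try crictl first, fall back to docker if crictl is absent or
        // errors out.
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("neither crictl nor docker produced a listing:", err)
            return
        }
        fmt.Print(string(out))
    }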
	I1217 12:03:47.580853 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:47.591106 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:47.591173 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:47.616262 3219848 cri.go:89] found id: ""
	I1217 12:03:47.616294 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.616304 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:47.616317 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:47.616384 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:47.641674 3219848 cri.go:89] found id: ""
	I1217 12:03:47.641702 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.641712 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:47.641718 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:47.641778 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:47.667191 3219848 cri.go:89] found id: ""
	I1217 12:03:47.667215 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.667224 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:47.667230 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:47.667296 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:47.696304 3219848 cri.go:89] found id: ""
	I1217 12:03:47.696332 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.696341 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:47.696349 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:47.696412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:47.726109 3219848 cri.go:89] found id: ""
	I1217 12:03:47.726134 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.726143 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:47.726149 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:47.726212 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:47.762878 3219848 cri.go:89] found id: ""
	I1217 12:03:47.762904 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.762914 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:47.762920 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:47.762977 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:47.824894 3219848 cri.go:89] found id: ""
	I1217 12:03:47.824932 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.824957 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:47.824973 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:47.825056 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:47.851816 3219848 cri.go:89] found id: ""
	I1217 12:03:47.851852 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.851861 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:47.851888 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:47.851907 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:47.908314 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:47.908352 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:47.924222 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:47.924250 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:47.986251 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:47.978126    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.978646    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980334    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980816    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.982319    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:47.978126    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.978646    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980334    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980816    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.982319    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:47.986276 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:47.986290 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:48.010815 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:48.010855 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
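The cycle start times (12:03:44.57, 12:03:47.58, 12:03:50.54, and so on) show the apiserver check being re-polled on roughly a three-second interval until some deadline expires. A sketch of the implied wait loop; the three-second interval and the deadline here are read off this log, not minikube's actual constants:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed deadline
        for time.Now().Before(deadline) {
            // The liveness probe each cycle starts with in the log.
            if exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }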
	I1217 12:03:50.542164 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:50.553364 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:50.553437 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:50.581389 3219848 cri.go:89] found id: ""
	I1217 12:03:50.581423 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.581432 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:50.581439 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:50.581508 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:50.610382 3219848 cri.go:89] found id: ""
	I1217 12:03:50.610405 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.610413 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:50.610422 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:50.610482 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:50.636111 3219848 cri.go:89] found id: ""
	I1217 12:03:50.636137 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.636147 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:50.636153 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:50.636218 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:50.661308 3219848 cri.go:89] found id: ""
	I1217 12:03:50.661334 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.661342 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:50.661350 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:50.661415 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:50.688144 3219848 cri.go:89] found id: ""
	I1217 12:03:50.688172 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.688181 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:50.688187 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:50.688251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:50.715059 3219848 cri.go:89] found id: ""
	I1217 12:03:50.715087 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.715096 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:50.715103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:50.715165 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:50.745229 3219848 cri.go:89] found id: ""
	I1217 12:03:50.745253 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.745262 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:50.745269 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:50.745330 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:50.793705 3219848 cri.go:89] found id: ""
	I1217 12:03:50.793735 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.793743 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:50.793752 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:50.793763 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:50.876190 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:50.876229 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:50.893552 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:50.893581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:50.960907 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:50.951439    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.952410    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954030    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954408    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.956833    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:50.951439    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.952410    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954030    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954408    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.956833    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:50.960928 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:50.960942 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:50.986454 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:50.986485 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:53.522123 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:53.533167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:53.533246 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:53.558553 3219848 cri.go:89] found id: ""
	I1217 12:03:53.558580 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.558589 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:53.558596 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:53.558668 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:53.586267 3219848 cri.go:89] found id: ""
	I1217 12:03:53.586295 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.586305 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:53.586318 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:53.586383 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:53.613148 3219848 cri.go:89] found id: ""
	I1217 12:03:53.613174 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.613183 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:53.613190 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:53.613251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:53.639336 3219848 cri.go:89] found id: ""
	I1217 12:03:53.639371 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.639381 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:53.639387 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:53.639452 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:53.664632 3219848 cri.go:89] found id: ""
	I1217 12:03:53.664700 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.664730 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:53.664745 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:53.664820 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:53.689663 3219848 cri.go:89] found id: ""
	I1217 12:03:53.689733 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.689760 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:53.689774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:53.689851 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:53.714636 3219848 cri.go:89] found id: ""
	I1217 12:03:53.714707 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.714733 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:53.714747 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:53.714827 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:53.744583 3219848 cri.go:89] found id: ""
	I1217 12:03:53.744610 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.744620 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:53.744629 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:53.744640 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:53.833845 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:53.833884 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:53.853606 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:53.853632 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:53.921245 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:53.912685    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.913171    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.914992    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.915543    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.917157    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:53.912685    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.913171    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.914992    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.915543    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.917157    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:53.921269 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:53.921282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:53.946578 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:53.946611 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:56.477034 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:56.488539 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:56.488622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:56.514320 3219848 cri.go:89] found id: ""
	I1217 12:03:56.514347 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.514356 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:56.514363 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:56.514426 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:56.540629 3219848 cri.go:89] found id: ""
	I1217 12:03:56.540668 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.540676 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:56.540687 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:56.540752 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:56.571552 3219848 cri.go:89] found id: ""
	I1217 12:03:56.571586 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.571595 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:56.571602 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:56.571725 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:56.598758 3219848 cri.go:89] found id: ""
	I1217 12:03:56.598835 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.598858 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:56.598878 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:56.598964 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:56.624629 3219848 cri.go:89] found id: ""
	I1217 12:03:56.624659 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.624668 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:56.624675 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:56.624736 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:56.650192 3219848 cri.go:89] found id: ""
	I1217 12:03:56.650214 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.650222 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:56.650229 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:56.650286 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:56.675523 3219848 cri.go:89] found id: ""
	I1217 12:03:56.675548 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.675557 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:56.675563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:56.675651 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:56.701703 3219848 cri.go:89] found id: ""
	I1217 12:03:56.701731 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.701740 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:56.701751 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:56.701762 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:56.717844 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:56.717877 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:56.837097 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:56.823600    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.824395    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827077    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827764    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.829468    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:56.823600    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.824395    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827077    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827764    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.829468    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:56.837160 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:56.837195 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:56.864759 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:56.864792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:56.892589 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:56.892615 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
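Note that this pass gathered the sources in a different order (kubelet last instead of first), and later passes reshuffle again. That is consistent with the log sources being iterated out of a Go map, whose iteration order is deliberately unspecified; this is an inference from the log, not confirmed against minikube's source. The effect is easy to demonstrate:

    package main

    import "fmt"

    func main() {
        // Go randomizes map iteration order on purpose; if the collector
        // keeps its sources in a map, each pass prints the "Gathering
        // logs for ..." lines in a different order, as seen above.
        sources := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg ... | tail -n 400",
            "describe nodes":   "kubectl describe nodes",
            "containerd":       "journalctl -u containerd -n 400",
            "container status": "crictl ps -a || docker ps -a",
        }
        for name := range sources {
            fmt.Println("Gathering logs for", name, "...")
        }
    }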
	I1217 12:03:59.450097 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:59.460573 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:59.460649 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:59.484966 3219848 cri.go:89] found id: ""
	I1217 12:03:59.484992 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.485001 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:59.485007 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:59.485073 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:59.509519 3219848 cri.go:89] found id: ""
	I1217 12:03:59.509545 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.509554 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:59.509561 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:59.509619 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:59.535238 3219848 cri.go:89] found id: ""
	I1217 12:03:59.535307 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.535331 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:59.535351 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:59.535443 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:59.561799 3219848 cri.go:89] found id: ""
	I1217 12:03:59.561823 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.561832 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:59.561839 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:59.561898 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:59.587394 3219848 cri.go:89] found id: ""
	I1217 12:03:59.587416 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.587425 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:59.587431 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:59.587489 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:59.614672 3219848 cri.go:89] found id: ""
	I1217 12:03:59.614695 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.614704 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:59.614712 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:59.614774 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:59.641144 3219848 cri.go:89] found id: ""
	I1217 12:03:59.641171 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.641180 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:59.641187 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:59.641251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:59.667139 3219848 cri.go:89] found id: ""
	I1217 12:03:59.667167 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.667176 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:59.667184 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:59.667196 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:59.725056 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:59.725091 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:59.741510 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:59.741593 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:59.858554 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:59.849895    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.850546    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852238    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852841    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.854544    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:59.849895    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.850546    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852238    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852841    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.854544    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:59.858578 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:59.858592 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:59.884457 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:59.884492 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
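For reference when reading the dmesg sections: the invocation restricts output to warnings and worse and keeps it pipe-friendly. The flag glosses below are the standard util-linux meanings; the sketch only wraps the command already shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // -P: no pager, -H: human-readable format, -L=never: no color
        // escapes (safe to pipe), --level ...: warnings and worse only;
        // tail -n 400 caps the tail, matching the journalctl gathers.
        out, err := exec.Command("/bin/bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
        if err != nil {
            fmt.Println("dmesg gather failed:", err)
            return
        }
        fmt.Print(string(out))
    }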
	I1217 12:04:02.413040 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:02.426774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:02.426848 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:02.456484 3219848 cri.go:89] found id: ""
	I1217 12:04:02.456587 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.456601 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:02.456609 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:02.456706 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:02.485434 3219848 cri.go:89] found id: ""
	I1217 12:04:02.485506 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.485531 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:02.485547 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:02.485622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:02.512063 3219848 cri.go:89] found id: ""
	I1217 12:04:02.512100 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.512109 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:02.512116 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:02.512195 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:02.538362 3219848 cri.go:89] found id: ""
	I1217 12:04:02.538433 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.538454 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:02.538462 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:02.538525 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:02.567959 3219848 cri.go:89] found id: ""
	I1217 12:04:02.567994 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.568003 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:02.568009 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:02.568077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:02.594823 3219848 cri.go:89] found id: ""
	I1217 12:04:02.594860 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.594869 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:02.594876 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:02.594950 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:02.625125 3219848 cri.go:89] found id: ""
	I1217 12:04:02.625196 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.625211 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:02.625219 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:02.625282 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:02.650998 3219848 cri.go:89] found id: ""
	I1217 12:04:02.651033 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.651042 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:02.651051 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:02.651062 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:02.676950 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:02.676984 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:02.711118 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:02.711144 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:02.774152 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:02.774233 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:02.794787 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:02.794862 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:02.886703 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:02.878272    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.878713    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880270    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880830    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.882492    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:02.878272    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.878713    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880270    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880830    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.882492    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
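One quirk to keep in mind when reading these blocks: each "failed describe nodes" entry shows the identical stderr twice, once embedded in the command error itself and once more in its own ** stderr ** section, so a quick scan overcounts the failures by a factor of two. The doubled shape falls out of logging the error and the captured output separately; a minimal reproduction of that shape (the failing command is a hypothetical stand-in, not the collector's real code path):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Any command that writes to stderr and exits nonzero will do.
        out, err := exec.Command("ls", "/nonexistent-path-for-demo").CombinedOutput()
        if err != nil {
            // First appearance: error plus captured output together.
            fmt.Printf("command failed: %v\nstderr:\n%s", err, out)
            // Second appearance: the output printed again on its own,
            // giving the doubled block seen in the log above.
            fmt.Printf(" output: \n** stderr ** \n%s** /stderr **\n", out)
        }
    }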
	I1217 12:04:05.386993 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:05.398225 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:05.398299 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:05.426294 3219848 cri.go:89] found id: ""
	I1217 12:04:05.426321 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.426330 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:05.426337 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:05.426399 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:05.451004 3219848 cri.go:89] found id: ""
	I1217 12:04:05.451027 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.451036 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:05.451049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:05.451112 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:05.476504 3219848 cri.go:89] found id: ""
	I1217 12:04:05.476532 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.476542 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:05.476549 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:05.476607 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:05.506001 3219848 cri.go:89] found id: ""
	I1217 12:04:05.506028 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.506036 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:05.506043 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:05.506103 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:05.531776 3219848 cri.go:89] found id: ""
	I1217 12:04:05.531803 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.531813 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:05.531820 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:05.531878 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:05.558040 3219848 cri.go:89] found id: ""
	I1217 12:04:05.558068 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.558078 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:05.558085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:05.558149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:05.582988 3219848 cri.go:89] found id: ""
	I1217 12:04:05.583024 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.583033 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:05.583040 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:05.583115 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:05.609687 3219848 cri.go:89] found id: ""
	I1217 12:04:05.609725 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.609734 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:05.609744 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:05.609756 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:05.677594 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:05.668798    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.669411    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671028    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671605    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.673145    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
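The repeated "connection refused" on localhost:8443 means kubectl on the node had no kube-apiserver to reach, consistent with the empty crictl listings above: no apiserver container exists, not even an exited one. A minimal way to probe the same symptom by hand from a shell on the node (a hypothetical session, not part of the test run; /livez is the apiserver health endpoint):

	# Fails with "connection refused" while nothing listens on 8443;
	# prints "ok" once a healthy apiserver is up.
	curl -sk https://localhost:8443/livez; echo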
	I1217 12:04:05.677661 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:05.677689 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:05.704024 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:05.704062 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:05.736880 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:05.736906 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:05.810417 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:05.810457 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
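Each retry gathers the same five diagnostics: the kubelet and containerd journals, dmesg, "describe nodes", and the container list. The equivalent manual collection, using the same commands the loop runs on the node:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig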
	I1217 12:04:08.343493 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:08.353931 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:08.354001 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:08.377982 3219848 cri.go:89] found id: ""
	I1217 12:04:08.378050 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.378062 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:08.378069 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:08.378160 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:08.402837 3219848 cri.go:89] found id: ""
	I1217 12:04:08.402870 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.402880 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:08.402886 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:08.402956 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:08.430641 3219848 cri.go:89] found id: ""
	I1217 12:04:08.430666 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.430675 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:08.430682 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:08.430747 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:08.455904 3219848 cri.go:89] found id: ""
	I1217 12:04:08.455937 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.455947 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:08.455954 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:08.456020 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:08.480357 3219848 cri.go:89] found id: ""
	I1217 12:04:08.480388 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.480398 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:08.480405 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:08.480506 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:08.505595 3219848 cri.go:89] found id: ""
	I1217 12:04:08.505629 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.505682 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:08.505701 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:08.505765 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:08.531028 3219848 cri.go:89] found id: ""
	I1217 12:04:08.531065 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.531074 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:08.531081 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:08.531156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:08.559015 3219848 cri.go:89] found id: ""
	I1217 12:04:08.559051 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.559060 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:08.559069 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:08.559081 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:08.574853 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:08.574883 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:08.640119 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:08.631556    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.632320    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634049    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634630    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.635699    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:08.640141 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:08.640154 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:08.666054 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:08.666091 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:08.694523 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:08.694553 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:11.260393 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:11.271847 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:11.271939 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:11.297537 3219848 cri.go:89] found id: ""
	I1217 12:04:11.297559 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.297568 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:11.297574 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:11.297669 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:11.326252 3219848 cri.go:89] found id: ""
	I1217 12:04:11.326279 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.326288 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:11.326295 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:11.326354 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:11.354965 3219848 cri.go:89] found id: ""
	I1217 12:04:11.354991 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.355013 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:11.355020 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:11.355085 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:11.379623 3219848 cri.go:89] found id: ""
	I1217 12:04:11.379649 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.379657 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:11.379664 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:11.379730 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:11.405089 3219848 cri.go:89] found id: ""
	I1217 12:04:11.405157 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.405185 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:11.405200 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:11.405276 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:11.431039 3219848 cri.go:89] found id: ""
	I1217 12:04:11.431064 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.431073 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:11.431079 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:11.431138 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:11.456294 3219848 cri.go:89] found id: ""
	I1217 12:04:11.456329 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.456338 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:11.456345 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:11.456437 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:11.485568 3219848 cri.go:89] found id: ""
	I1217 12:04:11.485595 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.485604 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:11.485613 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:11.485628 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:11.542231 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:11.542268 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:11.559119 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:11.559201 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:11.628507 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:11.619906    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.620667    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622406    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622904    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.624511    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:11.628580 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:11.628617 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:11.654658 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:11.654692 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:14.187317 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:14.200950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:14.201028 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:14.225871 3219848 cri.go:89] found id: ""
	I1217 12:04:14.225907 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.225917 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:14.225924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:14.225982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:14.255169 3219848 cri.go:89] found id: ""
	I1217 12:04:14.255194 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.255203 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:14.255210 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:14.255270 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:14.279884 3219848 cri.go:89] found id: ""
	I1217 12:04:14.279914 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.279928 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:14.279935 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:14.279993 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:14.303876 3219848 cri.go:89] found id: ""
	I1217 12:04:14.303902 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.303911 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:14.303918 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:14.303982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:14.329876 3219848 cri.go:89] found id: ""
	I1217 12:04:14.329902 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.329911 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:14.329924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:14.329993 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:14.355681 3219848 cri.go:89] found id: ""
	I1217 12:04:14.355707 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.355723 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:14.355730 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:14.355791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:14.380557 3219848 cri.go:89] found id: ""
	I1217 12:04:14.380582 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.380591 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:14.380607 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:14.380669 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:14.406559 3219848 cri.go:89] found id: ""
	I1217 12:04:14.406626 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.406652 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:14.406671 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:14.406684 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:14.435535 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:14.435567 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:14.496057 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:14.496100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:14.512036 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:14.512068 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:14.581215 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:14.571459    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.572243    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574240    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574925    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.576493    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:14.581280 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:14.581299 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:17.108603 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:17.119638 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:17.119710 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:17.144879 3219848 cri.go:89] found id: ""
	I1217 12:04:17.144901 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.144909 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:17.144915 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:17.144976 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:17.169341 3219848 cri.go:89] found id: ""
	I1217 12:04:17.169366 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.169375 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:17.169381 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:17.169440 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:17.193770 3219848 cri.go:89] found id: ""
	I1217 12:04:17.193792 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.193800 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:17.193806 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:17.193867 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:17.218766 3219848 cri.go:89] found id: ""
	I1217 12:04:17.218788 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.218797 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:17.218804 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:17.218911 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:17.246745 3219848 cri.go:89] found id: ""
	I1217 12:04:17.246768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.246777 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:17.246783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:17.246844 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:17.271877 3219848 cri.go:89] found id: ""
	I1217 12:04:17.271898 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.271907 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:17.271914 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:17.271971 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:17.296098 3219848 cri.go:89] found id: ""
	I1217 12:04:17.296124 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.296133 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:17.296140 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:17.296202 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:17.321740 3219848 cri.go:89] found id: ""
	I1217 12:04:17.321767 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.321777 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:17.321788 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:17.321799 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:17.378911 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:17.378944 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:17.395425 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:17.395454 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:17.458148 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:17.450570    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.450926    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.452495    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.452908    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.454301    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:17.458172 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:17.458185 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:17.483130 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:17.483199 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
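The full cycle repeats roughly every three seconds (12:04:05, :08, :11, :14, :17, ...): a poll for a running apiserver that keeps coming up empty. A standalone sketch of the same kind of wait, with a hypothetical two-minute deadline:

	# Poll the apiserver health endpoint until it answers or the deadline passes.
	timeout 120 bash -c \
	  'until curl -skf https://localhost:8443/livez >/dev/null; do sleep 3; done' \
	  && echo "apiserver is up" || echo "gave up waiting"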
	I1217 12:04:20.011622 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:20.036129 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:20.036210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:20.069785 3219848 cri.go:89] found id: ""
	I1217 12:04:20.069812 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.069820 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:20.069826 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:20.069891 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:20.118138 3219848 cri.go:89] found id: ""
	I1217 12:04:20.118165 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.118174 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:20.118180 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:20.118287 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:20.145219 3219848 cri.go:89] found id: ""
	I1217 12:04:20.145246 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.145267 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:20.145274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:20.145340 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:20.171515 3219848 cri.go:89] found id: ""
	I1217 12:04:20.171541 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.171549 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:20.171556 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:20.171615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:20.198371 3219848 cri.go:89] found id: ""
	I1217 12:04:20.198393 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.198409 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:20.198416 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:20.198476 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:20.226505 3219848 cri.go:89] found id: ""
	I1217 12:04:20.226529 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.226538 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:20.226544 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:20.226604 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:20.251848 3219848 cri.go:89] found id: ""
	I1217 12:04:20.251874 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.251883 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:20.251890 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:20.251951 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:20.281838 3219848 cri.go:89] found id: ""
	I1217 12:04:20.281863 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.281872 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:20.281887 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:20.281899 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:20.344875 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:20.336196    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.336887    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.338603    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.339150    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.340924    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:20.344897 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:20.344909 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:20.370205 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:20.370244 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:20.403171 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:20.403203 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:20.459306 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:20.459342 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:22.976954 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:22.987706 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:22.987785 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:23.048240 3219848 cri.go:89] found id: ""
	I1217 12:04:23.048267 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.048276 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:23.048282 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:23.048342 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:23.098972 3219848 cri.go:89] found id: ""
	I1217 12:04:23.099001 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.099041 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:23.099055 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:23.099142 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:23.130170 3219848 cri.go:89] found id: ""
	I1217 12:04:23.130192 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.130201 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:23.130207 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:23.130266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:23.157897 3219848 cri.go:89] found id: ""
	I1217 12:04:23.157919 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.157927 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:23.157933 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:23.157990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:23.186732 3219848 cri.go:89] found id: ""
	I1217 12:04:23.186757 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.186766 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:23.186772 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:23.186834 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:23.211252 3219848 cri.go:89] found id: ""
	I1217 12:04:23.211278 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.211287 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:23.211294 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:23.211360 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:23.235484 3219848 cri.go:89] found id: ""
	I1217 12:04:23.235507 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.235516 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:23.235523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:23.235593 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:23.263167 3219848 cri.go:89] found id: ""
	I1217 12:04:23.263195 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.263204 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:23.263213 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:23.263224 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:23.319468 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:23.319503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:23.335277 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:23.335309 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:23.401412 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:23.393032    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.393444    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395045    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395905    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.397587    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:23.401435 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:23.401447 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:23.427002 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:23.427042 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:25.955964 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:25.966813 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:25.966907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:25.991674 3219848 cri.go:89] found id: ""
	I1217 12:04:25.991698 3219848 logs.go:282] 0 containers: []
	W1217 12:04:25.991707 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:25.991714 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:25.991828 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:26.043851 3219848 cri.go:89] found id: ""
	I1217 12:04:26.043878 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.043888 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:26.043895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:26.043963 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:26.099675 3219848 cri.go:89] found id: ""
	I1217 12:04:26.099700 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.099708 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:26.099714 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:26.099786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:26.129744 3219848 cri.go:89] found id: ""
	I1217 12:04:26.129768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.129776 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:26.129783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:26.129849 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:26.155393 3219848 cri.go:89] found id: ""
	I1217 12:04:26.155420 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.155428 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:26.155434 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:26.155492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:26.182178 3219848 cri.go:89] found id: ""
	I1217 12:04:26.182200 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.182209 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:26.182216 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:26.182277 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:26.206976 3219848 cri.go:89] found id: ""
	I1217 12:04:26.207000 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.207009 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:26.207015 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:26.207072 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:26.231357 3219848 cri.go:89] found id: ""
	I1217 12:04:26.231383 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.231391 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:26.231400 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:26.231411 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:26.287609 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:26.287646 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:26.303654 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:26.303701 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:26.372084 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:26.363097    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.363759    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.365390    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.366039    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.367715    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:26.372107 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:26.372122 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:26.398349 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:26.398386 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
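Every crictl query in these cycles (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) returns an empty ID list, so no control-plane container was ever created. When triaging this by hand, the next step would typically be to check whether the static pod manifests exist and what the kubelet says about them (hypothetical commands; /etc/kubernetes/manifests is the usual kubeadm location):

	ls -l /etc/kubernetes/manifests          # static pod specs for apiserver, etcd, ...
	sudo crictl pods                         # were any pod sandboxes created at all?
	sudo journalctl -u kubelet -n 50 --no-pager | grep -iE 'apiserver|static'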
	I1217 12:04:28.926935 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:28.938567 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:28.938637 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:28.965018 3219848 cri.go:89] found id: ""
	I1217 12:04:28.965042 3219848 logs.go:282] 0 containers: []
	W1217 12:04:28.965050 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:28.965056 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:28.965116 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:28.993619 3219848 cri.go:89] found id: ""
	I1217 12:04:28.993646 3219848 logs.go:282] 0 containers: []
	W1217 12:04:28.993654 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:28.993661 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:28.993723 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:29.042253 3219848 cri.go:89] found id: ""
	I1217 12:04:29.042274 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.042282 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:29.042289 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:29.042347 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:29.109464 3219848 cri.go:89] found id: ""
	I1217 12:04:29.109486 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.109495 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:29.109501 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:29.109563 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:29.139820 3219848 cri.go:89] found id: ""
	I1217 12:04:29.139842 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.139850 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:29.139857 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:29.139917 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:29.165440 3219848 cri.go:89] found id: ""
	I1217 12:04:29.165465 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.165474 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:29.165481 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:29.165543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:29.191572 3219848 cri.go:89] found id: ""
	I1217 12:04:29.191597 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.191606 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:29.191613 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:29.191673 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:29.217986 3219848 cri.go:89] found id: ""
	I1217 12:04:29.218011 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.218020 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:29.218030 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:29.218041 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:29.274933 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:29.274967 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:29.290733 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:29.290760 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:29.358661 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:29.349933    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.350736    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352310    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352861    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.354377    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:29.349933    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.350736    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352310    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352861    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.354377    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:29.358683 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:29.358697 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:29.385070 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:29.385107 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
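The cycle above is minikube's apiserver wait loop: poll for a kube-apiserver process, list each control-plane component through the CRI, and, when everything comes back empty, gather the kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. To run the same probes by hand against the node, something like the following should work (a sketch only; the profile name is a placeholder of ours, not taken from the log, and `minikube -p <profile> ssh -- <cmd>` simply executes a command inside the node):

    # Same process check the loop runs first (exits non-zero when absent).
    minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Same CRI listing; empty output matches the repeated 'found id: ""' lines.
    minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver

    # Same unit logs the gatherer collects on each failed pass.
    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
    minikube -p <profile> ssh -- sudo journalctl -u containerd -n 400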
	I1217 12:04:31.914639 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:31.928018 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:31.928092 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:31.955140 3219848 cri.go:89] found id: ""
	I1217 12:04:31.955163 3219848 logs.go:282] 0 containers: []
	W1217 12:04:31.955171 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:31.955178 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:31.955252 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:31.982332 3219848 cri.go:89] found id: ""
	I1217 12:04:31.982364 3219848 logs.go:282] 0 containers: []
	W1217 12:04:31.982380 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:31.982387 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:31.982448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:32.045708 3219848 cri.go:89] found id: ""
	I1217 12:04:32.045731 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.045740 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:32.045746 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:32.045805 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:32.093198 3219848 cri.go:89] found id: ""
	I1217 12:04:32.093220 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.093229 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:32.093242 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:32.093301 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:32.120574 3219848 cri.go:89] found id: ""
	I1217 12:04:32.120641 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.120664 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:32.120684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:32.120772 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:32.151069 3219848 cri.go:89] found id: ""
	I1217 12:04:32.151137 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.151160 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:32.151182 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:32.151272 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:32.181226 3219848 cri.go:89] found id: ""
	I1217 12:04:32.181303 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.181326 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:32.181347 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:32.181439 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:32.207237 3219848 cri.go:89] found id: ""
	I1217 12:04:32.207295 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.207310 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:32.207324 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:32.207336 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:32.263771 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:32.263808 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:32.279666 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:32.279693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:32.345645 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:32.336846    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338076    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338956    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340462    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340816    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:32.336846    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338076    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338956    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340462    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340816    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:32.345666 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:32.345679 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:32.371311 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:32.371347 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:34.899829 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:34.911276 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:34.911354 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:34.936056 3219848 cri.go:89] found id: ""
	I1217 12:04:34.936080 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.936089 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:34.936096 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:34.936156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:34.962166 3219848 cri.go:89] found id: ""
	I1217 12:04:34.962192 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.962201 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:34.962207 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:34.962271 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:34.987891 3219848 cri.go:89] found id: ""
	I1217 12:04:34.987916 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.987926 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:34.987934 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:34.987994 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:35.036291 3219848 cri.go:89] found id: ""
	I1217 12:04:35.036319 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.036331 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:35.036339 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:35.036402 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:35.091997 3219848 cri.go:89] found id: ""
	I1217 12:04:35.092023 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.092041 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:35.092049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:35.092119 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:35.126699 3219848 cri.go:89] found id: ""
	I1217 12:04:35.126721 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.126736 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:35.126743 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:35.126802 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:35.152052 3219848 cri.go:89] found id: ""
	I1217 12:04:35.152077 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.152087 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:35.152094 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:35.152156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:35.177868 3219848 cri.go:89] found id: ""
	I1217 12:04:35.177897 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.177906 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:35.177916 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:35.177955 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:35.213172 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:35.213200 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:35.269771 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:35.269807 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:35.285802 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:35.285841 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:35.355953 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:35.345556    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.346168    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348018    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348336    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.351480    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:35.345556    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.346168    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348018    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348336    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.351480    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:35.355976 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:35.355988 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:37.883397 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:37.894032 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:37.894101 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:37.927040 3219848 cri.go:89] found id: ""
	I1217 12:04:37.927066 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.927075 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:37.927085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:37.927150 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:37.951890 3219848 cri.go:89] found id: ""
	I1217 12:04:37.951916 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.951925 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:37.951931 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:37.951995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:37.978258 3219848 cri.go:89] found id: ""
	I1217 12:04:37.978286 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.978295 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:37.978302 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:37.978383 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:38.032665 3219848 cri.go:89] found id: ""
	I1217 12:04:38.032689 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.032698 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:38.032705 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:38.032770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:38.068588 3219848 cri.go:89] found id: ""
	I1217 12:04:38.068617 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.068626 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:38.068633 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:38.068703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:38.111074 3219848 cri.go:89] found id: ""
	I1217 12:04:38.111102 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.111112 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:38.111119 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:38.111183 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:38.139962 3219848 cri.go:89] found id: ""
	I1217 12:04:38.139989 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.139998 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:38.140005 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:38.140071 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:38.165120 3219848 cri.go:89] found id: ""
	I1217 12:04:38.165147 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.165156 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:38.165165 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:38.165176 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:38.221183 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:38.221218 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:38.237532 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:38.237565 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:38.307341 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:38.299115    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.299933    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301496    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301856    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.303152    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:38.299115    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.299933    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301496    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301856    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.303152    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:38.307362 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:38.307376 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:38.333705 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:38.333739 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
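Every describe-nodes attempt fails the same way: kubectl dials https://localhost:8443 and gets connection refused, which is consistent with the empty crictl listings above (no kube-apiserver container means nothing is bound to 8443). A quick confirmation from inside the node, assuming the standard ss and curl utilities are present in the node image (the echo fallbacks are ours):

    # No listener on the apiserver port while the probe keeps failing.
    sudo ss -tlnp | grep -w 8443 || echo 'no listener on 8443'

    # Reproduces the same refusal kubectl reports in the stderr blocks.
    curl -sk https://localhost:8443/livez || echo 'connection refused, as in the log'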
	I1217 12:04:40.864326 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:40.875421 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:40.875500 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:40.900554 3219848 cri.go:89] found id: ""
	I1217 12:04:40.900576 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.900586 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:40.900592 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:40.900654 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:40.926107 3219848 cri.go:89] found id: ""
	I1217 12:04:40.926134 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.926143 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:40.926151 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:40.926210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:40.951315 3219848 cri.go:89] found id: ""
	I1217 12:04:40.951341 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.951350 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:40.951356 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:40.951414 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:40.976682 3219848 cri.go:89] found id: ""
	I1217 12:04:40.976713 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.976723 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:40.976731 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:40.976790 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:41.016365 3219848 cri.go:89] found id: ""
	I1217 12:04:41.016388 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.016396 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:41.016403 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:41.016527 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:41.081810 3219848 cri.go:89] found id: ""
	I1217 12:04:41.081838 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.081848 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:41.081856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:41.081915 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:41.107919 3219848 cri.go:89] found id: ""
	I1217 12:04:41.107946 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.107955 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:41.107962 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:41.108032 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:41.134563 3219848 cri.go:89] found id: ""
	I1217 12:04:41.134589 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.134599 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:41.134608 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:41.134619 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:41.192325 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:41.192362 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:41.208694 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:41.208723 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:41.279184 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:41.267762    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.268617    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.272813    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.273616    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.275116    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:41.267762    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.268617    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.272813    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.273616    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.275116    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:41.279207 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:41.279221 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:41.305398 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:41.305436 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:43.838273 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:43.849251 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:43.849321 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:43.873599 3219848 cri.go:89] found id: ""
	I1217 12:04:43.873671 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.873686 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:43.873694 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:43.873756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:43.902353 3219848 cri.go:89] found id: ""
	I1217 12:04:43.902378 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.902388 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:43.902395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:43.902486 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:43.928175 3219848 cri.go:89] found id: ""
	I1217 12:04:43.928202 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.928213 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:43.928220 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:43.928334 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:43.956883 3219848 cri.go:89] found id: ""
	I1217 12:04:43.956912 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.956921 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:43.956927 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:43.956996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:43.982931 3219848 cri.go:89] found id: ""
	I1217 12:04:43.982968 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.982979 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:43.982986 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:43.983053 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:44.030268 3219848 cri.go:89] found id: ""
	I1217 12:04:44.030294 3219848 logs.go:282] 0 containers: []
	W1217 12:04:44.030304 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:44.030311 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:44.030388 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:44.082991 3219848 cri.go:89] found id: ""
	I1217 12:04:44.083021 3219848 logs.go:282] 0 containers: []
	W1217 12:04:44.083042 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:44.083049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:44.083140 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:44.113120 3219848 cri.go:89] found id: ""
	I1217 12:04:44.113165 3219848 logs.go:282] 0 containers: []
	W1217 12:04:44.113175 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:44.113185 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:44.113204 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:44.172933 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:44.172970 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:44.189039 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:44.189066 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:44.257898 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:44.249336    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.250068    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.251815    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.252362    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.254049    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:44.249336    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.250068    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.251815    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.252362    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.254049    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:44.257924 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:44.257937 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:44.283680 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:44.283715 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:46.821352 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:46.832441 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:46.832520 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:46.858364 3219848 cri.go:89] found id: ""
	I1217 12:04:46.858390 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.858400 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:46.858407 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:46.858488 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:46.882836 3219848 cri.go:89] found id: ""
	I1217 12:04:46.882868 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.882876 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:46.882883 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:46.882952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:46.907815 3219848 cri.go:89] found id: ""
	I1217 12:04:46.907852 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.907861 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:46.907888 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:46.907972 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:46.933329 3219848 cri.go:89] found id: ""
	I1217 12:04:46.933353 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.933363 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:46.933377 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:46.933445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:46.959520 3219848 cri.go:89] found id: ""
	I1217 12:04:46.959546 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.959555 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:46.959562 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:46.959621 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:46.986527 3219848 cri.go:89] found id: ""
	I1217 12:04:46.986551 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.986561 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:46.986567 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:46.986627 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:47.034742 3219848 cri.go:89] found id: ""
	I1217 12:04:47.034765 3219848 logs.go:282] 0 containers: []
	W1217 12:04:47.034775 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:47.034781 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:47.034838 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:47.072115 3219848 cri.go:89] found id: ""
	I1217 12:04:47.072143 3219848 logs.go:282] 0 containers: []
	W1217 12:04:47.072152 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:47.072161 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:47.072173 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:47.138106 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:47.138141 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:47.156338 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:47.156381 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:47.224864 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:47.215946    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.216453    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.218361    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.219127    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.220895    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:47.215946    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.216453    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.218361    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.219127    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.220895    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:47.224889 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:47.224900 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:47.250608 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:47.250644 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
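The retries all use the node-local kubeconfig and the versioned kubectl binary shown on the command line, so an address mismatch can be ruled out separately from the missing apiserver. Two checks, using only paths that appear verbatim in the log (the grep pattern is our assumption about the kubeconfig layout):

    # The kubeconfig these retries use should point kubectl at localhost:8443.
    sudo grep -n 'server:' /var/lib/minikube/kubeconfig

    # The client binary matches the Kubernetes version under test.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl version --client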
	I1217 12:04:49.780985 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:49.791927 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:49.792002 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:49.817502 3219848 cri.go:89] found id: ""
	I1217 12:04:49.817526 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.817536 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:49.817542 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:49.817621 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:49.844464 3219848 cri.go:89] found id: ""
	I1217 12:04:49.844490 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.844499 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:49.844506 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:49.844614 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:49.874956 3219848 cri.go:89] found id: ""
	I1217 12:04:49.874982 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.874991 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:49.874998 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:49.875079 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:49.904772 3219848 cri.go:89] found id: ""
	I1217 12:04:49.904795 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.904804 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:49.904810 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:49.904872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:49.934337 3219848 cri.go:89] found id: ""
	I1217 12:04:49.934362 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.934372 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:49.934379 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:49.934472 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:49.959338 3219848 cri.go:89] found id: ""
	I1217 12:04:49.959362 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.959371 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:49.959378 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:49.959481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:49.984578 3219848 cri.go:89] found id: ""
	I1217 12:04:49.984606 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.984614 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:49.984621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:49.984679 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:50.043309 3219848 cri.go:89] found id: ""
	I1217 12:04:50.043395 3219848 logs.go:282] 0 containers: []
	W1217 12:04:50.043419 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:50.043456 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:50.043486 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:50.135752 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:50.127538    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.128073    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.129753    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.130124    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.131773    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:50.127538    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.128073    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.129753    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.130124    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.131773    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:50.135777 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:50.135792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:50.162030 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:50.162067 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:50.196447 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:50.196478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:50.254281 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:50.254318 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
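
The pattern that precedes and follows is minikube's wait-for-apiserver loop, repeated roughly every three seconds: probe for a kube-apiserver process, ask crictl for each expected control-plane container (every listing in this run returns an empty ID list), then re-gather kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal sketch of the same probe, assuming shell access to the minikube node and reusing only commands already shown in the log (the loop itself is illustrative, not part of the test):

    # Illustrative only: re-run the test's own probe commands until an
    # apiserver process appears. pgrep exits non-zero while there is no match.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
        sudo crictl ps -a --quiet --name=kube-apiserver   # empty throughout this run
        sleep 3
    done
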
	I1217 12:04:52.772408 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:52.783553 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:52.783633 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:52.820008 3219848 cri.go:89] found id: ""
	I1217 12:04:52.820043 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.820058 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:52.820065 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:52.820129 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:52.844905 3219848 cri.go:89] found id: ""
	I1217 12:04:52.844941 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.844949 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:52.844956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:52.845029 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:52.869543 3219848 cri.go:89] found id: ""
	I1217 12:04:52.869569 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.869586 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:52.869622 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:52.869698 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:52.894131 3219848 cri.go:89] found id: ""
	I1217 12:04:52.894160 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.894170 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:52.894177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:52.894266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:52.921694 3219848 cri.go:89] found id: ""
	I1217 12:04:52.921719 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.921729 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:52.921736 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:52.921795 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:52.947377 3219848 cri.go:89] found id: ""
	I1217 12:04:52.947411 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.947421 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:52.947452 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:52.947531 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:52.972742 3219848 cri.go:89] found id: ""
	I1217 12:04:52.972768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.972777 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:52.972787 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:52.972866 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:53.016484 3219848 cri.go:89] found id: ""
	I1217 12:04:53.016566 3219848 logs.go:282] 0 containers: []
	W1217 12:04:53.016588 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:53.016612 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:53.016657 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:53.091083 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:53.091153 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:53.109051 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:53.109075 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:53.174985 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:53.166259    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.167099    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.168974    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.169308    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.170842    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:53.166259    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.167099    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.168974    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.169308    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.170842    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:53.175008 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:53.175021 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:53.201645 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:53.201680 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:55.729262 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:55.742969 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:55.743043 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:55.772352 3219848 cri.go:89] found id: ""
	I1217 12:04:55.772374 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.772383 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:55.772389 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:55.772461 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:55.799085 3219848 cri.go:89] found id: ""
	I1217 12:04:55.799111 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.799120 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:55.799126 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:55.799191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:55.825805 3219848 cri.go:89] found id: ""
	I1217 12:04:55.825830 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.825839 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:55.825846 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:55.825907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:55.855875 3219848 cri.go:89] found id: ""
	I1217 12:04:55.855964 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.855979 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:55.855987 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:55.856055 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:55.881512 3219848 cri.go:89] found id: ""
	I1217 12:04:55.881539 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.881548 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:55.881555 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:55.881615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:55.911117 3219848 cri.go:89] found id: ""
	I1217 12:04:55.911149 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.911158 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:55.911165 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:55.911236 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:55.936738 3219848 cri.go:89] found id: ""
	I1217 12:04:55.936774 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.936783 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:55.936790 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:55.936865 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:55.962878 3219848 cri.go:89] found id: ""
	I1217 12:04:55.962904 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.962918 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:55.962937 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:55.962950 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:55.991943 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:55.991988 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:56.062887 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:56.062922 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:56.129315 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:56.129356 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:56.145986 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:56.146013 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:56.214623 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:56.205795    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.206560    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208125    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208507    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.210116    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:56.205795    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.206560    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208125    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208507    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.210116    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
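
Every "describe nodes" attempt in this window fails the same way: kubectl cannot reach https://localhost:8443 because no kube-apiserver container was ever created, so the connection-refused errors are a symptom rather than the root failure. A hedged manual check, using the kubectl binary and kubeconfig paths from the log plus standard tooling (ss is an assumption here, not something the test runs):

    # Illustrative only: confirm nothing is listening on the apiserver port,
    # then ask the same kubectl binary for the readiness endpoint.
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
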
	I1217 12:04:58.715974 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:58.727395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:58.727466 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:58.751936 3219848 cri.go:89] found id: ""
	I1217 12:04:58.751961 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.751970 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:58.751977 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:58.752036 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:58.778416 3219848 cri.go:89] found id: ""
	I1217 12:04:58.778439 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.778447 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:58.778454 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:58.778517 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:58.806136 3219848 cri.go:89] found id: ""
	I1217 12:04:58.806160 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.806169 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:58.806175 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:58.806233 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:58.835276 3219848 cri.go:89] found id: ""
	I1217 12:04:58.835311 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.835321 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:58.835328 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:58.835396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:58.862517 3219848 cri.go:89] found id: ""
	I1217 12:04:58.862596 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.862612 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:58.862620 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:58.862695 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:58.888027 3219848 cri.go:89] found id: ""
	I1217 12:04:58.888055 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.888065 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:58.888072 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:58.888156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:58.913027 3219848 cri.go:89] found id: ""
	I1217 12:04:58.913106 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.913123 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:58.913132 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:58.913210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:58.938554 3219848 cri.go:89] found id: ""
	I1217 12:04:58.938578 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.938587 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:58.938599 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:58.938611 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:58.995142 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:58.995175 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:59.026309 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:59.026388 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:59.124135 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:59.115677    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117093    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117405    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.118755    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.119195    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:59.115677    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117093    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117405    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.118755    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.119195    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:59.124157 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:59.124170 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:59.149882 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:59.149925 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:01.680518 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:01.692630 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:01.692709 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:01.721621 3219848 cri.go:89] found id: ""
	I1217 12:05:01.721647 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.721656 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:01.721664 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:01.721731 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:01.748186 3219848 cri.go:89] found id: ""
	I1217 12:05:01.748213 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.748232 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:01.748239 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:01.748310 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:01.774670 3219848 cri.go:89] found id: ""
	I1217 12:05:01.774694 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.774703 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:01.774709 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:01.774770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:01.800533 3219848 cri.go:89] found id: ""
	I1217 12:05:01.800609 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.800635 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:01.800649 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:01.800726 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:01.832193 3219848 cri.go:89] found id: ""
	I1217 12:05:01.832221 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.832230 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:01.832238 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:01.832314 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:01.859699 3219848 cri.go:89] found id: ""
	I1217 12:05:01.859733 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.859743 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:01.859750 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:01.859825 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:01.891844 3219848 cri.go:89] found id: ""
	I1217 12:05:01.891869 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.891893 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:01.891901 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:01.891988 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:01.922765 3219848 cri.go:89] found id: ""
	I1217 12:05:01.922791 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.922801 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:01.922811 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:01.922821 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:01.984618 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:01.984654 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:02.003531 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:02.003573 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:02.119039 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:02.109047    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.109431    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.111503    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.112283    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.113931    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:02.109047    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.109431    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.111503    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.112283    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.113931    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:02.119062 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:02.119074 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:02.145052 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:02.145090 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:04.675110 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:04.686658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:04.686731 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:04.714143 3219848 cri.go:89] found id: ""
	I1217 12:05:04.714169 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.714178 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:04.714185 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:04.714246 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:04.741446 3219848 cri.go:89] found id: ""
	I1217 12:05:04.741472 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.741481 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:04.741488 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:04.741549 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:04.771197 3219848 cri.go:89] found id: ""
	I1217 12:05:04.771224 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.771234 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:04.771241 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:04.771305 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:04.801798 3219848 cri.go:89] found id: ""
	I1217 12:05:04.801824 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.801834 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:04.801840 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:04.801901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:04.827212 3219848 cri.go:89] found id: ""
	I1217 12:05:04.827240 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.827249 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:04.827257 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:04.827322 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:04.852794 3219848 cri.go:89] found id: ""
	I1217 12:05:04.852821 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.852831 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:04.852838 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:04.852898 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:04.879034 3219848 cri.go:89] found id: ""
	I1217 12:05:04.879058 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.879069 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:04.879075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:04.879134 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:04.904782 3219848 cri.go:89] found id: ""
	I1217 12:05:04.904806 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.904814 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:04.904823 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:04.904833 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:04.961550 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:04.961581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:04.977831 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:04.977861 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:05.101127 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:05.083862    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.093276    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.094908    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.095507    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.097102    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:05.083862    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.093276    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.094908    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.095507    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.097102    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:05.101155 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:05.101168 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:05.128517 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:05.128550 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:07.660217 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:07.670837 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:07.670907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:07.696773 3219848 cri.go:89] found id: ""
	I1217 12:05:07.696800 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.696809 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:07.696816 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:07.696873 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:07.722665 3219848 cri.go:89] found id: ""
	I1217 12:05:07.722688 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.722697 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:07.722703 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:07.722770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:07.748882 3219848 cri.go:89] found id: ""
	I1217 12:05:07.748907 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.748916 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:07.748922 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:07.748983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:07.777951 3219848 cri.go:89] found id: ""
	I1217 12:05:07.777976 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.777985 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:07.777992 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:07.778052 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:07.807386 3219848 cri.go:89] found id: ""
	I1217 12:05:07.807414 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.807423 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:07.807430 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:07.807492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:07.836910 3219848 cri.go:89] found id: ""
	I1217 12:05:07.836938 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.836947 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:07.836954 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:07.837012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:07.861301 3219848 cri.go:89] found id: ""
	I1217 12:05:07.861327 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.861337 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:07.861343 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:07.861402 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:07.885389 3219848 cri.go:89] found id: ""
	I1217 12:05:07.885412 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.885422 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:07.885431 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:07.885444 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:07.940922 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:07.940954 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:07.956764 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:07.956792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:08.040092 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:08.023500    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.024045    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.030582    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.031321    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.035763    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:08.023500    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.024045    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.030582    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.031321    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.035763    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:08.040167 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:08.040195 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:08.076595 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:08.076674 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:10.614548 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:10.625273 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:10.625344 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:10.649744 3219848 cri.go:89] found id: ""
	I1217 12:05:10.649774 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.649782 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:10.649789 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:10.649847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:10.673909 3219848 cri.go:89] found id: ""
	I1217 12:05:10.673936 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.673945 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:10.673952 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:10.674010 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:10.699817 3219848 cri.go:89] found id: ""
	I1217 12:05:10.699840 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.699849 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:10.699855 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:10.699914 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:10.724608 3219848 cri.go:89] found id: ""
	I1217 12:05:10.724630 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.724638 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:10.724645 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:10.724702 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:10.756858 3219848 cri.go:89] found id: ""
	I1217 12:05:10.756883 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.756892 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:10.756899 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:10.756959 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:10.787011 3219848 cri.go:89] found id: ""
	I1217 12:05:10.787037 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.787046 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:10.787052 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:10.787111 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:10.816658 3219848 cri.go:89] found id: ""
	I1217 12:05:10.816683 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.816691 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:10.816698 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:10.816757 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:10.841817 3219848 cri.go:89] found id: ""
	I1217 12:05:10.841882 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.841899 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:10.841909 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:10.841920 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:10.899952 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:10.899994 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:10.915585 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:10.915615 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:10.983597 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:10.975197    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.975853    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.977490    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.978147    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.979658    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:10.975197    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.975853    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.977490    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.978147    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.979658    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:10.983619 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:10.983636 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:11.013827 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:11.013865 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:13.590017 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:13.601224 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:13.601300 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:13.630748 3219848 cri.go:89] found id: ""
	I1217 12:05:13.630771 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.630781 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:13.630788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:13.630845 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:13.659125 3219848 cri.go:89] found id: ""
	I1217 12:05:13.659150 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.659160 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:13.659166 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:13.659224 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:13.689040 3219848 cri.go:89] found id: ""
	I1217 12:05:13.689066 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.689075 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:13.689082 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:13.689149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:13.713917 3219848 cri.go:89] found id: ""
	I1217 12:05:13.713941 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.713949 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:13.713956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:13.714016 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:13.738663 3219848 cri.go:89] found id: ""
	I1217 12:05:13.738686 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.738695 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:13.738701 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:13.738759 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:13.762897 3219848 cri.go:89] found id: ""
	I1217 12:05:13.762922 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.762931 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:13.762938 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:13.762995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:13.791695 3219848 cri.go:89] found id: ""
	I1217 12:05:13.791720 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.791736 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:13.791743 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:13.791800 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:13.821207 3219848 cri.go:89] found id: ""
	I1217 12:05:13.821230 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.821239 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:13.821248 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:13.821259 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:13.848837 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:13.848867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:13.906239 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:13.906278 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:13.921882 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:13.921917 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:13.991574 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:13.982172    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.983086    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985111    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985659    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.986629    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:13.982172    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.983086    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985111    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985659    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.986629    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:13.991596 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:13.991609 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
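	The block above is one full iteration of minikube's apiserver wait loop: pgrep looks for a kube-apiserver process, crictl is asked for a container (in any state) for each control-plane component, and when nothing is found the loop gathers kubelet, dmesg, describe-nodes, containerd and container-status output before retrying a few seconds later. The probe can be reproduced by hand from a node shell; the sketch below is illustrative only, and <profile> is a placeholder rather than a name taken from this log.

	    # Open a shell on the node first: minikube ssh -p <profile>   (<profile> is hypothetical)
	    # 1. Is a kube-apiserver process running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    # 2. Has the runtime ever created a container for it (any state)?
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # 3. If both are empty, the kubelet never started the static pod; its
	    #    journal is the next place to look, exactly as the loop does.
	    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 50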
	I1217 12:05:16.525032 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:16.535486 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:16.535556 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:16.561699 3219848 cri.go:89] found id: ""
	I1217 12:05:16.561721 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.561730 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:16.561736 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:16.561792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:16.586264 3219848 cri.go:89] found id: ""
	I1217 12:05:16.586287 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.586296 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:16.586303 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:16.586360 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:16.611385 3219848 cri.go:89] found id: ""
	I1217 12:05:16.611409 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.611418 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:16.611425 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:16.611485 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:16.636230 3219848 cri.go:89] found id: ""
	I1217 12:05:16.636256 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.636267 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:16.636274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:16.636332 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:16.660919 3219848 cri.go:89] found id: ""
	I1217 12:05:16.660942 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.660950 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:16.660956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:16.661013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:16.688962 3219848 cri.go:89] found id: ""
	I1217 12:05:16.688987 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.688996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:16.689003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:16.689070 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:16.719405 3219848 cri.go:89] found id: ""
	I1217 12:05:16.719428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.719437 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:16.719443 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:16.719502 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:16.745166 3219848 cri.go:89] found id: ""
	I1217 12:05:16.745192 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.745201 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:16.745211 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:16.745223 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:16.771975 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:16.772014 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:16.804149 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:16.804180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:16.861212 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:16.861249 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:16.877226 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:16.877257 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:16.943896 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:16.935292    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.935946    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.937663    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.938200    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.939861    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:16.935292    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.935946    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.937663    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.938200    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.939861    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
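	Every describe-nodes attempt in this stretch fails the same way: requests to https://localhost:8443 are refused at the TCP layer, which means nothing is listening on the apiserver port at all, not that a running apiserver is rejecting the client. A quick way to tell the two cases apart from the node (a sketch, assuming the default RBAC that allows anonymous health probes):

	    # A closed port makes curl fail to connect; a live apiserver answers
	    # /livez (with ?verbose it lists the individual checks).
	    curl -k --connect-timeout 2 'https://localhost:8443/livez?verbose' \
	      || echo "port 8443 closed: the kube-apiserver container was never started"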
	I1217 12:05:19.444922 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:19.455525 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:19.455598 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:19.480970 3219848 cri.go:89] found id: ""
	I1217 12:05:19.480995 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.481006 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:19.481017 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:19.481079 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:19.506235 3219848 cri.go:89] found id: ""
	I1217 12:05:19.506258 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.506267 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:19.506274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:19.506333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:19.532063 3219848 cri.go:89] found id: ""
	I1217 12:05:19.532086 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.532095 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:19.532105 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:19.532165 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:19.562427 3219848 cri.go:89] found id: ""
	I1217 12:05:19.562450 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.562460 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:19.562466 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:19.562524 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:19.587869 3219848 cri.go:89] found id: ""
	I1217 12:05:19.587903 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.587912 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:19.587919 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:19.587990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:19.612889 3219848 cri.go:89] found id: ""
	I1217 12:05:19.612916 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.612925 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:19.612932 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:19.612990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:19.637949 3219848 cri.go:89] found id: ""
	I1217 12:05:19.637972 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.637980 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:19.637992 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:19.638053 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:19.666633 3219848 cri.go:89] found id: ""
	I1217 12:05:19.666703 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.666740 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:19.666769 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:19.666798 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:19.726394 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:19.726430 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:19.742581 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:19.742662 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:19.807145 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:19.798144    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.799463    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.800143    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.801047    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.802652    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:19.798144    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.799463    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.800143    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.801047    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.802652    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:19.807174 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:19.807187 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:19.832758 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:19.832792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
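	The container-status step is worth a second look for its shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a first tries crictl by resolved path, falls back to the bare name when `which` finds nothing, and only if that whole invocation fails does it run docker ps -a. The same runtime-agnostic pattern as a reusable function (a sketch, not minikube code):

	    # Prefer crictl; fall back to docker if crictl is missing or errors out.
	    list_containers() {
	      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	    }
	    list_containers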
	I1217 12:05:22.366107 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:22.376592 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:22.376666 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:22.401822 3219848 cri.go:89] found id: ""
	I1217 12:05:22.401847 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.401857 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:22.401863 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:22.401921 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:22.425903 3219848 cri.go:89] found id: ""
	I1217 12:05:22.425927 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.425936 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:22.425943 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:22.426008 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:22.454459 3219848 cri.go:89] found id: ""
	I1217 12:05:22.454484 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.454493 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:22.454499 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:22.454559 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:22.479178 3219848 cri.go:89] found id: ""
	I1217 12:05:22.479202 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.479212 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:22.479219 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:22.479276 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:22.505859 3219848 cri.go:89] found id: ""
	I1217 12:05:22.505885 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.505900 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:22.505908 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:22.505995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:22.531485 3219848 cri.go:89] found id: ""
	I1217 12:05:22.531506 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.531515 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:22.531523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:22.531583 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:22.558267 3219848 cri.go:89] found id: ""
	I1217 12:05:22.558343 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.558360 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:22.558367 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:22.558427 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:22.588380 3219848 cri.go:89] found id: ""
	I1217 12:05:22.588431 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.588442 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:22.588451 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:22.588463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:22.647590 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:22.647629 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:22.665568 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:22.665597 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:22.738273 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:22.729900    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.730477    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732137    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732564    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.734423    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:22.729900    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.730477    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732137    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732564    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.734423    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:22.738298 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:22.738310 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:22.764468 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:22.764503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:25.296756 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:25.320288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:25.320356 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:25.347938 3219848 cri.go:89] found id: ""
	I1217 12:05:25.347959 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.347967 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:25.347973 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:25.348030 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:25.375407 3219848 cri.go:89] found id: ""
	I1217 12:05:25.375428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.375438 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:25.375444 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:25.375501 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:25.400165 3219848 cri.go:89] found id: ""
	I1217 12:05:25.400187 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.400195 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:25.400202 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:25.400266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:25.428203 3219848 cri.go:89] found id: ""
	I1217 12:05:25.428229 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.428240 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:25.428247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:25.428307 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:25.454651 3219848 cri.go:89] found id: ""
	I1217 12:05:25.454675 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.454685 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:25.454692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:25.454754 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:25.478961 3219848 cri.go:89] found id: ""
	I1217 12:05:25.478987 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.478996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:25.479003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:25.479088 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:25.508637 3219848 cri.go:89] found id: ""
	I1217 12:05:25.508661 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.508670 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:25.508676 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:25.508782 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:25.534245 3219848 cri.go:89] found id: ""
	I1217 12:05:25.534270 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.534279 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:25.534289 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:25.534306 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:25.569632 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:25.569662 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:25.625748 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:25.625783 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:25.641383 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:25.641409 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:25.709135 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:25.700728    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.701299    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703107    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703541    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.705177    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:25.700728    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.701299    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703107    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703541    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.705177    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:25.709156 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:25.709168 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:28.233802 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:28.244795 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:28.244872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:28.291390 3219848 cri.go:89] found id: ""
	I1217 12:05:28.291412 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.291421 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:28.291427 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:28.291488 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:28.355887 3219848 cri.go:89] found id: ""
	I1217 12:05:28.355909 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.355917 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:28.355924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:28.355983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:28.381610 3219848 cri.go:89] found id: ""
	I1217 12:05:28.381633 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.381641 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:28.381647 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:28.381707 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:28.407516 3219848 cri.go:89] found id: ""
	I1217 12:05:28.407544 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.407553 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:28.407560 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:28.407622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:28.436914 3219848 cri.go:89] found id: ""
	I1217 12:05:28.436982 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.437006 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:28.437021 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:28.437098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:28.461189 3219848 cri.go:89] found id: ""
	I1217 12:05:28.461258 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.461283 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:28.461298 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:28.461373 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:28.490913 3219848 cri.go:89] found id: ""
	I1217 12:05:28.490948 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.490958 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:28.490965 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:28.491033 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:28.521566 3219848 cri.go:89] found id: ""
	I1217 12:05:28.521589 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.521599 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:28.521610 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:28.521622 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:28.577123 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:28.577159 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:28.593088 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:28.593119 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:28.655447 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:28.646846   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.647657   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649169   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649707   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.651192   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:28.646846   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.647657   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649169   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649707   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.651192   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:28.655472 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:28.655484 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:28.680532 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:28.680566 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:31.213979 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:31.224716 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:31.224784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:31.257050 3219848 cri.go:89] found id: ""
	I1217 12:05:31.257071 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.257079 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:31.257085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:31.257141 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:31.315656 3219848 cri.go:89] found id: ""
	I1217 12:05:31.315677 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.315686 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:31.315692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:31.315746 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:31.349340 3219848 cri.go:89] found id: ""
	I1217 12:05:31.349360 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.349369 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:31.349375 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:31.349432 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:31.374728 3219848 cri.go:89] found id: ""
	I1217 12:05:31.374755 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.374764 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:31.374771 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:31.374833 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:31.401386 3219848 cri.go:89] found id: ""
	I1217 12:05:31.401422 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.401432 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:31.401439 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:31.401511 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:31.427234 3219848 cri.go:89] found id: ""
	I1217 12:05:31.427260 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.427270 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:31.427277 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:31.427338 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:31.452628 3219848 cri.go:89] found id: ""
	I1217 12:05:31.452666 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.452676 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:31.452684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:31.452756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:31.476684 3219848 cri.go:89] found id: ""
	I1217 12:05:31.476717 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.476725 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:31.476735 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:31.476745 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:31.533895 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:31.533928 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:31.549405 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:31.549433 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:31.617988 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:31.609983   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.610706   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612313   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612915   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.613854   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:31.609983   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.610706   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612313   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612915   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.613854   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:31.618022 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:31.618051 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:31.643544 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:31.643575 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
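	When a run loops like this, the same bundle the loop keeps printing (kubelet and containerd journals, dmesg, container status, describe nodes) can be captured once from the host instead of read out of the test log; a sketch, with <profile> again a placeholder:

	    # Collect the full log bundle for offline inspection.
	    minikube logs -p <profile> --file=minikube-logs.txt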
	I1217 12:05:34.173214 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:34.183798 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:34.183881 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:34.208274 3219848 cri.go:89] found id: ""
	I1217 12:05:34.208299 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.208309 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:34.208315 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:34.208377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:34.232844 3219848 cri.go:89] found id: ""
	I1217 12:05:34.232870 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.232879 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:34.232886 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:34.232947 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:34.298630 3219848 cri.go:89] found id: ""
	I1217 12:05:34.298656 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.298665 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:34.298672 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:34.298732 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:34.352614 3219848 cri.go:89] found id: ""
	I1217 12:05:34.352657 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.352672 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:34.352679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:34.352745 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:34.378134 3219848 cri.go:89] found id: ""
	I1217 12:05:34.378156 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.378165 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:34.378171 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:34.378234 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:34.402637 3219848 cri.go:89] found id: ""
	I1217 12:05:34.402660 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.402668 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:34.402675 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:34.402758 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:34.428834 3219848 cri.go:89] found id: ""
	I1217 12:05:34.428906 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.428941 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:34.428948 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:34.429006 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:34.459618 3219848 cri.go:89] found id: ""
	I1217 12:05:34.459641 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.459654 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:34.459663 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:34.459674 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:34.514834 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:34.514867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:34.531691 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:34.531717 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:34.603404 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:34.594905   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.595653   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597085   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597664   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.599260   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:34.594905   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.595653   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597085   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597664   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.599260   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:34.603478 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:34.603498 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:34.629092 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:34.629131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:37.158533 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:37.170305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:37.170377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:37.195895 3219848 cri.go:89] found id: ""
	I1217 12:05:37.195920 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.195929 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:37.195936 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:37.195994 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:37.221126 3219848 cri.go:89] found id: ""
	I1217 12:05:37.221153 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.221162 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:37.221170 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:37.221228 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:37.246560 3219848 cri.go:89] found id: ""
	I1217 12:05:37.246584 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.246593 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:37.246600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:37.246663 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:37.289595 3219848 cri.go:89] found id: ""
	I1217 12:05:37.289620 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.289629 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:37.289635 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:37.289707 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:37.324903 3219848 cri.go:89] found id: ""
	I1217 12:05:37.324923 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.324932 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:37.324939 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:37.324997 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:37.361173 3219848 cri.go:89] found id: ""
	I1217 12:05:37.361194 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.361204 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:37.361210 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:37.361269 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:37.389438 3219848 cri.go:89] found id: ""
	I1217 12:05:37.389461 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.389470 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:37.389476 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:37.389537 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:37.414662 3219848 cri.go:89] found id: ""
	I1217 12:05:37.414700 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.414710 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:37.414719 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:37.414731 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:37.478614 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:37.471157   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.471613   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473091   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473470   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.474881   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:37.478647 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:37.478661 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:37.504204 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:37.504241 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:37.535207 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:37.535282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:37.594334 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:37.594382 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
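
When every probe comes back empty, the loop falls back to dumping raw host logs from four fixed sources. Run by hand, the commands are exactly those shown above, including the crictl-or-docker fallback for container status:

    # verbatim from the Run: lines above
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
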
	I1217 12:05:40.110392 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:40.122282 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:40.122363 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:40.148146 3219848 cri.go:89] found id: ""
	I1217 12:05:40.148171 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.148180 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:40.148186 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:40.148248 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:40.175122 3219848 cri.go:89] found id: ""
	I1217 12:05:40.175149 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.175158 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:40.175164 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:40.175224 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:40.201606 3219848 cri.go:89] found id: ""
	I1217 12:05:40.201629 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.201638 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:40.201644 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:40.201702 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:40.227663 3219848 cri.go:89] found id: ""
	I1217 12:05:40.227688 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.227697 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:40.227704 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:40.227760 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:40.279855 3219848 cri.go:89] found id: ""
	I1217 12:05:40.279881 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.279889 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:40.279896 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:40.279955 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:40.341349 3219848 cri.go:89] found id: ""
	I1217 12:05:40.341372 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.341381 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:40.341388 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:40.341445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:40.366250 3219848 cri.go:89] found id: ""
	I1217 12:05:40.366276 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.366285 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:40.366292 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:40.366374 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:40.390064 3219848 cri.go:89] found id: ""
	I1217 12:05:40.390091 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.390100 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:40.390112 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:40.390143 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:40.417840 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:40.417866 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:40.474223 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:40.474260 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:40.489995 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:40.490025 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:40.558792 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:40.550389   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.551049   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.552779   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.553286   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.554780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:40.558816 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:40.558829 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
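
Note that the describe-nodes step targets the node-local kubeconfig rather than the host's, so it exercises the same localhost:8443 endpoint that keeps refusing connections. To reproduce a single attempt by hand inside the node (paths verbatim from the log):

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
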
	I1217 12:05:43.085654 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:43.096719 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:43.096788 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:43.122755 3219848 cri.go:89] found id: ""
	I1217 12:05:43.122822 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.122846 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:43.122862 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:43.122942 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:43.149072 3219848 cri.go:89] found id: ""
	I1217 12:05:43.149097 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.149106 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:43.149113 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:43.149192 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:43.175863 3219848 cri.go:89] found id: ""
	I1217 12:05:43.175889 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.175897 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:43.175929 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:43.176015 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:43.202533 3219848 cri.go:89] found id: ""
	I1217 12:05:43.202572 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.202580 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:43.202587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:43.202649 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:43.227233 3219848 cri.go:89] found id: ""
	I1217 12:05:43.227307 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.227331 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:43.227352 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:43.227449 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:43.293609 3219848 cri.go:89] found id: ""
	I1217 12:05:43.293677 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.293701 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:43.293723 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:43.293807 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:43.345463 3219848 cri.go:89] found id: ""
	I1217 12:05:43.345537 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.345563 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:43.345584 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:43.345692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:43.376719 3219848 cri.go:89] found id: ""
	I1217 12:05:43.376754 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.376763 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:43.376772 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:43.376785 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:43.434376 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:43.434408 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:43.449996 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:43.450023 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:43.518159 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:43.509135   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.509732   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511431   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511909   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.513376   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:43.518179 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:43.518193 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:43.544448 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:43.544487 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:46.079862 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:46.091017 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:46.091085 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:46.116886 3219848 cri.go:89] found id: ""
	I1217 12:05:46.116913 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.116924 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:46.116939 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:46.117008 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:46.142188 3219848 cri.go:89] found id: ""
	I1217 12:05:46.142216 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.142227 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:46.142234 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:46.142296 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:46.168033 3219848 cri.go:89] found id: ""
	I1217 12:05:46.168059 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.168068 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:46.168075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:46.168141 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:46.194149 3219848 cri.go:89] found id: ""
	I1217 12:05:46.194178 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.194188 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:46.194197 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:46.194257 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:46.220319 3219848 cri.go:89] found id: ""
	I1217 12:05:46.220345 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.220354 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:46.220360 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:46.220456 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:46.246104 3219848 cri.go:89] found id: ""
	I1217 12:05:46.246131 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.246140 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:46.246147 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:46.246208 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:46.281496 3219848 cri.go:89] found id: ""
	I1217 12:05:46.281520 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.281528 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:46.281535 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:46.281597 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:46.327477 3219848 cri.go:89] found id: ""
	I1217 12:05:46.327558 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.327582 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:46.327625 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:46.327653 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:46.407413 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:46.407451 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:46.423419 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:46.423448 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:46.489920 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:46.481686   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.482194   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.483773   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.484191   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.485671   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:46.489945 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:46.489959 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:46.516022 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:46.516061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:49.045130 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:49.056135 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:49.056216 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:49.080547 3219848 cri.go:89] found id: ""
	I1217 12:05:49.080568 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.080577 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:49.080583 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:49.080645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:49.106805 3219848 cri.go:89] found id: ""
	I1217 12:05:49.106834 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.106844 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:49.106850 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:49.106911 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:49.132478 3219848 cri.go:89] found id: ""
	I1217 12:05:49.132501 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.132509 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:49.132515 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:49.132579 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:49.159868 3219848 cri.go:89] found id: ""
	I1217 12:05:49.159896 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.159906 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:49.159912 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:49.159971 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:49.187789 3219848 cri.go:89] found id: ""
	I1217 12:05:49.187814 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.187835 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:49.187843 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:49.187902 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:49.213461 3219848 cri.go:89] found id: ""
	I1217 12:05:49.213489 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.213498 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:49.213505 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:49.213612 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:49.240191 3219848 cri.go:89] found id: ""
	I1217 12:05:49.240220 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.240229 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:49.240247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:49.240343 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:49.295243 3219848 cri.go:89] found id: ""
	I1217 12:05:49.295291 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.295306 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:49.295319 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:49.295331 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:49.359872 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:49.359903 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:49.427963 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:49.428002 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:49.444788 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:49.444818 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:49.510631 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:49.502008   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.502410   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504142   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504867   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.506554   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:49.510652 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:49.510663 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:52.036765 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:52.049010 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:52.049084 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:52.075472 3219848 cri.go:89] found id: ""
	I1217 12:05:52.075500 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.075510 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:52.075517 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:52.075582 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:52.105198 3219848 cri.go:89] found id: ""
	I1217 12:05:52.105222 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.105231 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:52.105238 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:52.105295 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:52.133404 3219848 cri.go:89] found id: ""
	I1217 12:05:52.133428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.133439 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:52.133445 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:52.133507 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:52.158170 3219848 cri.go:89] found id: ""
	I1217 12:05:52.158195 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.158205 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:52.158212 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:52.158270 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:52.182679 3219848 cri.go:89] found id: ""
	I1217 12:05:52.182704 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.182713 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:52.182720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:52.182778 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:52.211743 3219848 cri.go:89] found id: ""
	I1217 12:05:52.211769 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.211778 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:52.211785 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:52.211845 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:52.237892 3219848 cri.go:89] found id: ""
	I1217 12:05:52.237918 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.237927 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:52.237933 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:52.237990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:52.281029 3219848 cri.go:89] found id: ""
	I1217 12:05:52.281055 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.281063 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:52.281073 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:52.281089 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:52.374683 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:52.374721 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:52.390831 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:52.390863 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:52.454058 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:52.444629   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.445414   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447158   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447874   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.449604   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:52.454081 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:52.454095 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:52.479410 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:52.479443 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:55.007287 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:55.021703 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:55.021785 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:55.047053 3219848 cri.go:89] found id: ""
	I1217 12:05:55.047076 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.047085 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:55.047092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:55.047149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:55.074641 3219848 cri.go:89] found id: ""
	I1217 12:05:55.074665 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.074674 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:55.074680 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:55.074742 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:55.103484 3219848 cri.go:89] found id: ""
	I1217 12:05:55.103512 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.103521 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:55.103527 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:55.103586 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:55.132461 3219848 cri.go:89] found id: ""
	I1217 12:05:55.132487 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.132497 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:55.132503 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:55.132561 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:55.157595 3219848 cri.go:89] found id: ""
	I1217 12:05:55.157618 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.157626 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:55.157632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:55.157694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:55.187334 3219848 cri.go:89] found id: ""
	I1217 12:05:55.187354 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.187364 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:55.187371 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:55.187529 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:55.212469 3219848 cri.go:89] found id: ""
	I1217 12:05:55.212492 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.212501 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:55.212508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:55.212567 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:55.238155 3219848 cri.go:89] found id: ""
	I1217 12:05:55.238188 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.238198 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:55.238208 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:55.238237 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:55.361507 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:55.352214   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.352982   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.354653   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.355179   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.356793   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:55.361529 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:55.361542 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:55.387722 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:55.387760 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:55.415663 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:55.415688 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:55.471304 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:55.471342 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:57.988615 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:57.999088 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:57.999163 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:58.029910 3219848 cri.go:89] found id: ""
	I1217 12:05:58.029938 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.029948 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:58.029955 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:58.030021 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:58.056383 3219848 cri.go:89] found id: ""
	I1217 12:05:58.056409 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.056461 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:58.056468 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:58.056526 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:58.082442 3219848 cri.go:89] found id: ""
	I1217 12:05:58.082468 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.082477 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:58.082483 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:58.082543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:58.110467 3219848 cri.go:89] found id: ""
	I1217 12:05:58.110491 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.110500 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:58.110507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:58.110574 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:58.136852 3219848 cri.go:89] found id: ""
	I1217 12:05:58.136879 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.136888 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:58.136895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:58.136976 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:58.163746 3219848 cri.go:89] found id: ""
	I1217 12:05:58.163772 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.163782 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:58.163788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:58.163847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:58.190425 3219848 cri.go:89] found id: ""
	I1217 12:05:58.190451 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.190460 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:58.190467 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:58.190529 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:58.220315 3219848 cri.go:89] found id: ""
	I1217 12:05:58.220338 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.220347 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:58.220358 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:58.220368 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:58.290204 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:58.290287 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:58.323039 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:58.323120 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:58.402482 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:58.393790   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.394615   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396214   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396884   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.398347   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:58.402504 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:58.402521 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:58.428716 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:58.428754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
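Each retry cycle above begins by probing for every expected control-plane container with `sudo crictl ps -a --quiet --name=<component>`; an empty ID list is what produces the `No container was found matching "<name>"` warnings. A minimal, hypothetical Go sketch of that probe step, run locally for illustration (minikube itself issues these commands over SSH through ssh_runner.go, and the component list is copied from the log):

// Hypothetical local sketch of the per-component probe each cycle performs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.TrimSpace(string(out))
		if err != nil || ids == "" {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, ids)
	}
}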
	I1217 12:06:00.959753 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:00.970910 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:00.970990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:01.005870 3219848 cri.go:89] found id: ""
	I1217 12:06:01.005941 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.005958 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:01.005967 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:01.006031 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:01.034724 3219848 cri.go:89] found id: ""
	I1217 12:06:01.034747 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.034756 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:01.034765 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:01.034823 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:01.059798 3219848 cri.go:89] found id: ""
	I1217 12:06:01.059824 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.059836 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:01.059842 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:01.059900 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:01.089347 3219848 cri.go:89] found id: ""
	I1217 12:06:01.089370 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.089378 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:01.089385 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:01.089448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:01.115166 3219848 cri.go:89] found id: ""
	I1217 12:06:01.115201 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.115211 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:01.115218 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:01.115286 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:01.142081 3219848 cri.go:89] found id: ""
	I1217 12:06:01.142109 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.142118 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:01.142125 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:01.142211 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:01.173172 3219848 cri.go:89] found id: ""
	I1217 12:06:01.173198 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.173208 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:01.173215 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:01.173280 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:01.200453 3219848 cri.go:89] found id: ""
	I1217 12:06:01.200477 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.200486 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:01.200496 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:01.200506 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:01.226189 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:01.226231 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:01.283020 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:01.283101 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:01.360095 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:01.360131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:01.377017 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:01.377049 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:01.442041 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:01.434467   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.434821   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436378   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436785   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.438222   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
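Every `describe nodes` attempt fails the same way: with no kube-apiserver container running, nothing is listening on the apiserver port, so kubectl's requests to https://localhost:8443 are refused at the TCP level before TLS or auth even start. A tiny sketch that reproduces just that symptom (port 8443 is taken from the kubeconfig URL in the errors above; run it inside the node):

// Minimal reproduction of the "connect: connection refused" symptom.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // e.g. "connect: connection refused"
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}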
	I1217 12:06:03.943920 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:03.955271 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:03.955384 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:03.987078 3219848 cri.go:89] found id: ""
	I1217 12:06:03.987106 3219848 logs.go:282] 0 containers: []
	W1217 12:06:03.987115 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:03.987124 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:03.987185 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:04.020179 3219848 cri.go:89] found id: ""
	I1217 12:06:04.020207 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.020243 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:04.020250 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:04.020328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:04.049457 3219848 cri.go:89] found id: ""
	I1217 12:06:04.049484 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.049494 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:04.049500 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:04.049565 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:04.077274 3219848 cri.go:89] found id: ""
	I1217 12:06:04.077302 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.077311 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:04.077318 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:04.077386 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:04.108697 3219848 cri.go:89] found id: ""
	I1217 12:06:04.108725 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.108734 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:04.108740 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:04.108800 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:04.133870 3219848 cri.go:89] found id: ""
	I1217 12:06:04.133949 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.133974 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:04.133988 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:04.134075 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:04.158589 3219848 cri.go:89] found id: ""
	I1217 12:06:04.158616 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.158625 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:04.158632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:04.158705 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:04.182544 3219848 cri.go:89] found id: ""
	I1217 12:06:04.182568 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.182577 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:04.182605 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:04.182630 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:04.198694 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:04.198722 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:04.286551 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:04.273260   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.277107   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.278962   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.279268   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.280776   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:04.286576 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:04.286587 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:04.322177 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:04.322211 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:04.362745 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:04.362774 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
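The timestamps show the loop re-running roughly every three seconds (12:06:00, :03, :06, :09, ...): each iteration first checks for a live apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*` and only then falls back to the container probes. A hedged sketch of that wait loop; the two-minute deadline is an assumption for illustration, not minikube's actual timeout:

// Hedged sketch of the ~3 s apiserver wait loop visible in the timestamps.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline, not minikube's real value
	for time.Now().Before(deadline) {
		// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
		// (-x exact match, -n newest process, -f match the full command line)
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}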
	I1217 12:06:06.922523 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:06.933191 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:06.933262 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:06.962648 3219848 cri.go:89] found id: ""
	I1217 12:06:06.962675 3219848 logs.go:282] 0 containers: []
	W1217 12:06:06.962685 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:06.962692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:06.962750 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:06.991732 3219848 cri.go:89] found id: ""
	I1217 12:06:06.991757 3219848 logs.go:282] 0 containers: []
	W1217 12:06:06.991765 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:06.991772 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:06.991829 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:07.018557 3219848 cri.go:89] found id: ""
	I1217 12:06:07.018584 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.018594 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:07.018600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:07.018659 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:07.044679 3219848 cri.go:89] found id: ""
	I1217 12:06:07.044704 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.044713 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:07.044720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:07.044786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:07.073836 3219848 cri.go:89] found id: ""
	I1217 12:06:07.073905 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.073930 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:07.073944 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:07.074020 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:07.100945 3219848 cri.go:89] found id: ""
	I1217 12:06:07.100972 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.100982 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:07.100989 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:07.101094 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:07.125935 3219848 cri.go:89] found id: ""
	I1217 12:06:07.125963 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.125972 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:07.125978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:07.126061 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:07.151599 3219848 cri.go:89] found id: ""
	I1217 12:06:07.151624 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.151633 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:07.151641 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:07.151653 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:07.167414 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:07.167439 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:07.235174 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:07.226345   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.226997   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.228627   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.229347   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.231069   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:07.235246 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:07.235266 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:07.264720 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:07.264754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:07.349181 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:07.349210 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
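The "container status" gather step is deliberately runtime-agnostic: the backquoted `which crictl || echo crictl` substitutes crictl when it resolves on PATH, and the outer `|| sudo docker ps -a` falls back to Docker if the crictl invocation fails. The same chain, shelled out from Go purely for illustration:

// Runs the log's runtime-agnostic container listing: crictl first, docker as fallback.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}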
	I1217 12:06:09.906484 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:09.917044 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:09.917120 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:09.941939 3219848 cri.go:89] found id: ""
	I1217 12:06:09.942004 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.942024 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:09.942031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:09.942088 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:09.966481 3219848 cri.go:89] found id: ""
	I1217 12:06:09.966507 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.966515 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:09.966523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:09.966622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:09.991806 3219848 cri.go:89] found id: ""
	I1217 12:06:09.991830 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.991839 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:09.991845 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:09.991901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:10.027713 3219848 cri.go:89] found id: ""
	I1217 12:06:10.027784 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.027800 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:10.027808 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:10.027874 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:10.060097 3219848 cri.go:89] found id: ""
	I1217 12:06:10.060124 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.060133 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:10.060140 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:10.060203 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:10.091977 3219848 cri.go:89] found id: ""
	I1217 12:06:10.092002 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.092010 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:10.092018 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:10.092081 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:10.118481 3219848 cri.go:89] found id: ""
	I1217 12:06:10.118504 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.118513 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:10.118526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:10.118586 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:10.145196 3219848 cri.go:89] found id: ""
	I1217 12:06:10.145263 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.145278 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:10.145288 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:10.145306 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:10.161573 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:10.161603 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:10.227235 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:10.218460   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.219270   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.220964   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.221573   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.223258   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:10.227259 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:10.227273 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:10.253333 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:10.253644 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:10.302209 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:10.302284 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:12.881891 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:12.892449 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:12.892519 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:12.919824 3219848 cri.go:89] found id: ""
	I1217 12:06:12.919848 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.919856 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:12.919863 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:12.919924 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:12.946684 3219848 cri.go:89] found id: ""
	I1217 12:06:12.946711 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.946721 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:12.946728 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:12.946808 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:12.970796 3219848 cri.go:89] found id: ""
	I1217 12:06:12.970820 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.970830 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:12.970837 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:12.970904 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:12.996393 3219848 cri.go:89] found id: ""
	I1217 12:06:12.996459 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.996469 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:12.996476 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:12.996538 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:13.022560 3219848 cri.go:89] found id: ""
	I1217 12:06:13.022587 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.022596 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:13.022603 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:13.022664 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:13.050809 3219848 cri.go:89] found id: ""
	I1217 12:06:13.050839 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.050849 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:13.050856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:13.050919 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:13.077432 3219848 cri.go:89] found id: ""
	I1217 12:06:13.077460 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.077469 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:13.077477 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:13.077540 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:13.104029 3219848 cri.go:89] found id: ""
	I1217 12:06:13.104056 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.104065 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:13.104075 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:13.104086 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:13.162000 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:13.162038 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:13.177865 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:13.177891 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:13.241266 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:13.232767   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.233565   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235109   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235417   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.236871   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:13.241289 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:13.241302 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:13.271232 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:13.271269 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:15.839567 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:15.850326 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:15.850396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:15.875471 3219848 cri.go:89] found id: ""
	I1217 12:06:15.875493 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.875502 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:15.875509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:15.875566 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:15.899977 3219848 cri.go:89] found id: ""
	I1217 12:06:15.899998 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.900007 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:15.900013 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:15.900073 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:15.926093 3219848 cri.go:89] found id: ""
	I1217 12:06:15.926117 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.926126 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:15.926133 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:15.926193 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:15.951373 3219848 cri.go:89] found id: ""
	I1217 12:06:15.951397 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.951407 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:15.951414 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:15.951470 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:15.976937 3219848 cri.go:89] found id: ""
	I1217 12:06:15.976963 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.976972 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:15.976979 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:15.977041 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:16.003518 3219848 cri.go:89] found id: ""
	I1217 12:06:16.003717 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.003750 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:16.003786 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:16.003901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:16.032115 3219848 cri.go:89] found id: ""
	I1217 12:06:16.032142 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.032151 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:16.032159 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:16.032219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:16.061490 3219848 cri.go:89] found id: ""
	I1217 12:06:16.061517 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.061526 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:16.061536 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:16.061547 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:16.077146 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:16.077179 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:16.145955 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:16.137946   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.138559   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140379   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140910   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.142053   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:16.145981 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:16.145995 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:16.172145 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:16.172180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:16.206805 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:16.206833 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:18.766689 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:18.777034 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:18.777108 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:18.805815 3219848 cri.go:89] found id: ""
	I1217 12:06:18.805838 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.805847 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:18.805853 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:18.805910 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:18.831468 3219848 cri.go:89] found id: ""
	I1217 12:06:18.831492 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.831501 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:18.831508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:18.831567 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:18.859309 3219848 cri.go:89] found id: ""
	I1217 12:06:18.859339 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.859349 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:18.859368 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:18.859436 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:18.884524 3219848 cri.go:89] found id: ""
	I1217 12:06:18.884552 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.884561 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:18.884569 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:18.884665 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:18.909522 3219848 cri.go:89] found id: ""
	I1217 12:06:18.909545 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.909554 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:18.909561 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:18.909620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:18.935126 3219848 cri.go:89] found id: ""
	I1217 12:06:18.935151 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.935161 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:18.935167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:18.935227 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:18.964480 3219848 cri.go:89] found id: ""
	I1217 12:06:18.964506 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.964516 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:18.964522 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:18.964581 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:18.990408 3219848 cri.go:89] found id: ""
	I1217 12:06:18.990435 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.990444 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:18.990454 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:18.990466 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:19.017937 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:19.017974 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:19.048976 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:19.049004 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:19.108146 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:19.108184 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:19.125457 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:19.125507 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:19.190960 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:19.182754   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.183274   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185018   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185416   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.186923   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:21.691321 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:21.702288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:21.702373 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:21.728533 3219848 cri.go:89] found id: ""
	I1217 12:06:21.728561 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.728571 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:21.728577 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:21.728645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:21.755298 3219848 cri.go:89] found id: ""
	I1217 12:06:21.755323 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.755333 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:21.755345 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:21.755403 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:21.784470 3219848 cri.go:89] found id: ""
	I1217 12:06:21.784494 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.784503 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:21.784509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:21.784568 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:21.811503 3219848 cri.go:89] found id: ""
	I1217 12:06:21.811528 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.811538 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:21.811544 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:21.811602 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:21.841147 3219848 cri.go:89] found id: ""
	I1217 12:06:21.841212 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.841227 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:21.841241 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:21.841303 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:21.867736 3219848 cri.go:89] found id: ""
	I1217 12:06:21.867763 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.867773 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:21.867779 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:21.867847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:21.897039 3219848 cri.go:89] found id: ""
	I1217 12:06:21.897104 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.897121 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:21.897128 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:21.897187 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:21.922398 3219848 cri.go:89] found id: ""
	I1217 12:06:21.922420 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.922429 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
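Each polling round has the same shape: a pgrep for a running kube-apiserver process, then one crictl query per expected control-plane workload, listing containers in any state ({State:all}) and filtering by name. The empty results (found id: "", 0 containers) mean these containers were never created under /run/containerd/runc/k8s.io, not that they crashed. A minimal sketch of the same probe, run on the node (assumes crictl is pointed at the containerd socket, as minikube configures it):

    # Probe for control-plane containers the way the log collector does.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then echo "no containers matching $c"; else echo "$c: $ids"; fi
    done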
	I1217 12:06:21.922438 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:21.922449 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:21.980203 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:21.980241 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:21.996482 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:21.996513 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:22.074426 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:22.061574   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.062326   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064118   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064738   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.070487   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:22.074474 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:22.074488 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:22.101174 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:22.101210 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
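With nothing to inspect at the container level, the collector falls back to host-side evidence: the last 400 journal lines each for kubelet and containerd, kernel messages at warning severity and above, and a container listing that prefers crictl but degrades to docker ps. The same bundle can be pulled by hand with the commands already visible above (sketch, run inside the node):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a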
	I1217 12:06:24.630003 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:24.640702 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:24.640773 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:24.666366 3219848 cri.go:89] found id: ""
	I1217 12:06:24.666390 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.666399 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:24.666408 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:24.666465 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:24.693372 3219848 cri.go:89] found id: ""
	I1217 12:06:24.693398 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.693407 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:24.693413 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:24.693478 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:24.723159 3219848 cri.go:89] found id: ""
	I1217 12:06:24.723181 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.723190 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:24.723197 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:24.723264 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:24.747933 3219848 cri.go:89] found id: ""
	I1217 12:06:24.747960 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.747969 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:24.747976 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:24.748044 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:24.774083 3219848 cri.go:89] found id: ""
	I1217 12:06:24.774105 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.774113 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:24.774120 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:24.774186 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:24.808050 3219848 cri.go:89] found id: ""
	I1217 12:06:24.808076 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.808085 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:24.808092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:24.808200 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:24.833993 3219848 cri.go:89] found id: ""
	I1217 12:06:24.834070 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.834085 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:24.834093 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:24.834153 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:24.860654 3219848 cri.go:89] found id: ""
	I1217 12:06:24.860679 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.860688 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:24.860697 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:24.860708 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:24.917182 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:24.917265 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:24.933462 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:24.933491 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:25.002903 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:24.992978   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.993789   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.995410   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.996068   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.997870   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
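Note that "describe nodes" is attempted with the version-matched kubectl that minikube ships inside the node (/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl) against the node-local kubeconfig, so the failure cannot be blamed on a host-side kubectl or version skew; the bundled client hits the same dead port. Rerunning it by hand would look roughly like this (profile name is a placeholder):

    minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig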
	I1217 12:06:25.002927 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:25.002960 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:25.031774 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:25.031809 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:27.560620 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:27.575695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:27.575766 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:27.603395 3219848 cri.go:89] found id: ""
	I1217 12:06:27.603421 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.603430 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:27.603436 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:27.603498 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:27.628716 3219848 cri.go:89] found id: ""
	I1217 12:06:27.628739 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.628747 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:27.628754 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:27.628810 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:27.653566 3219848 cri.go:89] found id: ""
	I1217 12:06:27.653629 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.653653 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:27.653679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:27.653756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:27.679125 3219848 cri.go:89] found id: ""
	I1217 12:06:27.679150 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.679159 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:27.679166 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:27.679245 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:27.705566 3219848 cri.go:89] found id: ""
	I1217 12:06:27.705632 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.705656 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:27.705677 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:27.705762 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:27.730473 3219848 cri.go:89] found id: ""
	I1217 12:06:27.730541 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.730556 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:27.730564 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:27.730639 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:27.755451 3219848 cri.go:89] found id: ""
	I1217 12:06:27.755476 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.755485 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:27.755492 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:27.755552 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:27.783637 3219848 cri.go:89] found id: ""
	I1217 12:06:27.783663 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.783673 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:27.783682 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:27.783693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:27.815668 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:27.815707 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:27.846761 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:27.846788 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:27.903961 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:27.903992 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:27.920251 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:27.920285 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:27.989986 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:27.982512   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.983471   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984453   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984910   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.985983   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:30.490267 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:30.501854 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:30.501936 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:30.577313 3219848 cri.go:89] found id: ""
	I1217 12:06:30.577342 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.577352 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:30.577376 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:30.577460 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:30.606634 3219848 cri.go:89] found id: ""
	I1217 12:06:30.606660 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.606670 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:30.606676 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:30.606744 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:30.632310 3219848 cri.go:89] found id: ""
	I1217 12:06:30.632342 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.632351 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:30.632358 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:30.632473 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:30.658929 3219848 cri.go:89] found id: ""
	I1217 12:06:30.658960 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.658970 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:30.658976 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:30.659036 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:30.690494 3219848 cri.go:89] found id: ""
	I1217 12:06:30.690519 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.690529 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:30.690535 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:30.690598 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:30.716270 3219848 cri.go:89] found id: ""
	I1217 12:06:30.716295 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.716305 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:30.716312 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:30.716396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:30.743684 3219848 cri.go:89] found id: ""
	I1217 12:06:30.743720 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.743738 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:30.743745 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:30.743823 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:30.771862 3219848 cri.go:89] found id: ""
	I1217 12:06:30.771895 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.771905 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:30.771915 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:30.771928 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:30.829962 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:30.829997 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:30.846244 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:30.846269 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:30.910789 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:30.902355   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.903184   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.904920   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.905376   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.906932   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:30.910812 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:30.910825 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:30.937515 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:30.937552 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:33.467661 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:33.479263 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:33.479335 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:33.531382 3219848 cri.go:89] found id: ""
	I1217 12:06:33.531405 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.531414 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:33.531420 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:33.531491 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:33.586607 3219848 cri.go:89] found id: ""
	I1217 12:06:33.586628 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.586637 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:33.586651 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:33.586708 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:33.622903 3219848 cri.go:89] found id: ""
	I1217 12:06:33.622925 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.622934 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:33.622940 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:33.623012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:33.652846 3219848 cri.go:89] found id: ""
	I1217 12:06:33.652874 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.652882 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:33.652889 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:33.652946 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:33.677852 3219848 cri.go:89] found id: ""
	I1217 12:06:33.677877 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.677886 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:33.677893 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:33.677972 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:33.706815 3219848 cri.go:89] found id: ""
	I1217 12:06:33.706840 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.706849 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:33.706856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:33.706918 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:33.736780 3219848 cri.go:89] found id: ""
	I1217 12:06:33.736806 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.736816 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:33.736822 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:33.736880 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:33.761376 3219848 cri.go:89] found id: ""
	I1217 12:06:33.761414 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.761424 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:33.761433 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:33.761445 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:33.819076 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:33.819113 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:33.835282 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:33.835311 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:33.903109 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:33.894518   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.895131   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.896856   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.897422   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.899092   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:33.903181 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:33.903206 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:33.935593 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:33.935636 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:36.469816 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:36.480311 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:36.480394 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:36.522000 3219848 cri.go:89] found id: ""
	I1217 12:06:36.522026 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.522035 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:36.522041 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:36.522098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:36.584783 3219848 cri.go:89] found id: ""
	I1217 12:06:36.584811 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.584819 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:36.584825 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:36.584885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:36.614443 3219848 cri.go:89] found id: ""
	I1217 12:06:36.614469 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.614478 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:36.614484 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:36.614543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:36.642952 3219848 cri.go:89] found id: ""
	I1217 12:06:36.642974 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.642982 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:36.642989 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:36.643047 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:36.667989 3219848 cri.go:89] found id: ""
	I1217 12:06:36.668011 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.668019 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:36.668025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:36.668109 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:36.696974 3219848 cri.go:89] found id: ""
	I1217 12:06:36.697049 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.697062 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:36.697096 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:36.697191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:36.723789 3219848 cri.go:89] found id: ""
	I1217 12:06:36.723812 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.723821 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:36.723828 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:36.723885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:36.748007 3219848 cri.go:89] found id: ""
	I1217 12:06:36.748078 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.748102 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:36.748126 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:36.748167 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:36.778526 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:36.778554 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:36.834614 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:36.834648 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:36.852247 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:36.852276 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:36.920099 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:36.911022   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.911723   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913310   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913643   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.915127   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:36.920123 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:36.920135 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:39.447091 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:39.457670 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:39.457740 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:39.481236 3219848 cri.go:89] found id: ""
	I1217 12:06:39.481260 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.481269 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:39.481276 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:39.481333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:39.539773 3219848 cri.go:89] found id: ""
	I1217 12:06:39.539800 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.539810 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:39.539817 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:39.539879 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:39.586024 3219848 cri.go:89] found id: ""
	I1217 12:06:39.586053 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.586069 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:39.586075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:39.586133 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:39.614247 3219848 cri.go:89] found id: ""
	I1217 12:06:39.614272 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.614281 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:39.614288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:39.614348 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:39.639817 3219848 cri.go:89] found id: ""
	I1217 12:06:39.639840 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.639848 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:39.639855 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:39.639910 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:39.663356 3219848 cri.go:89] found id: ""
	I1217 12:06:39.663382 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.663390 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:39.663397 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:39.663457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:39.692611 3219848 cri.go:89] found id: ""
	I1217 12:06:39.692638 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.692647 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:39.692654 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:39.692714 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:39.718640 3219848 cri.go:89] found id: ""
	I1217 12:06:39.718665 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.718674 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:39.718686 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:39.718698 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:39.743735 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:39.743776 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:39.776101 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:39.776130 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:39.839871 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:39.839912 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:39.856925 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:39.856956 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:39.927715 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:39.920216   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.920790   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.921971   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.922477   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.923988   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
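The timestamps show the whole probe-and-gather cycle repeating on a roughly three-second interval (12:06:21, :24, :27, :30, ...), with every iteration returning the identical empty result, until the wait exhausts its budget and the test fails. Functionally the loop amounts to something like this sketch (the 300 s deadline is illustrative, not the value minikube uses):

    # Poll until a kube-apiserver container exists or the deadline passes.
    deadline=$((SECONDS + 300))
    until [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; do
      if [ "$SECONDS" -ge "$deadline" ]; then echo "timed out waiting for kube-apiserver" >&2; exit 1; fi
      sleep 3
    done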
	I1217 12:06:42.428378 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:42.439785 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:42.439861 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:42.467826 3219848 cri.go:89] found id: ""
	I1217 12:06:42.467849 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.467857 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:42.467864 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:42.467928 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:42.492505 3219848 cri.go:89] found id: ""
	I1217 12:06:42.492533 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.492542 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:42.492549 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:42.492607 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:42.562039 3219848 cri.go:89] found id: ""
	I1217 12:06:42.562062 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.562071 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:42.562077 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:42.562147 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:42.600111 3219848 cri.go:89] found id: ""
	I1217 12:06:42.600139 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.600148 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:42.600155 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:42.600218 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:42.631003 3219848 cri.go:89] found id: ""
	I1217 12:06:42.631026 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.631035 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:42.631042 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:42.631101 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:42.655257 3219848 cri.go:89] found id: ""
	I1217 12:06:42.655283 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.655292 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:42.655305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:42.655366 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:42.681199 3219848 cri.go:89] found id: ""
	I1217 12:06:42.681220 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.681229 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:42.681236 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:42.681295 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:42.706511 3219848 cri.go:89] found id: ""
	I1217 12:06:42.706535 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.706544 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:42.706553 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:42.706565 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:42.762839 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:42.762875 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:42.779904 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:42.779936 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:42.849079 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:42.840586   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.841187   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.842724   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.843182   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.844615   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:42.840586   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.841187   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.842724   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.843182   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.844615   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:42.849103 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:42.849114 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:42.874488 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:42.874529 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:45.406478 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:45.417919 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:45.417989 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:45.445578 3219848 cri.go:89] found id: ""
	I1217 12:06:45.445614 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.445624 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:45.445632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:45.445694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:45.477590 3219848 cri.go:89] found id: ""
	I1217 12:06:45.477674 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.477699 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:45.477735 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:45.477831 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:45.515743 3219848 cri.go:89] found id: ""
	I1217 12:06:45.515765 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.515774 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:45.515781 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:45.515840 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:45.550588 3219848 cri.go:89] found id: ""
	I1217 12:06:45.550610 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.550619 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:45.550626 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:45.550684 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:45.595764 3219848 cri.go:89] found id: ""
	I1217 12:06:45.595785 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.595794 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:45.595802 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:45.595862 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:45.621971 3219848 cri.go:89] found id: ""
	I1217 12:06:45.621994 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.622003 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:45.622010 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:45.622077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:45.648142 3219848 cri.go:89] found id: ""
	I1217 12:06:45.648176 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.648186 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:45.648193 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:45.648266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:45.677328 3219848 cri.go:89] found id: ""
	I1217 12:06:45.677364 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.677373 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:45.677383 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:45.677401 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:45.750976 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:45.739342   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.739999   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.744563   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.745136   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.746629   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:45.739342   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.739999   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.744563   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.745136   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.746629   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:45.750998 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:45.751012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:45.777019 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:45.777056 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:45.805927 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:45.805957 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:45.861380 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:45.861414 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:48.377400 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:48.388086 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:48.388158 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:48.412282 3219848 cri.go:89] found id: ""
	I1217 12:06:48.412305 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.412313 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:48.412320 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:48.412377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:48.437811 3219848 cri.go:89] found id: ""
	I1217 12:06:48.437846 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.437856 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:48.437879 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:48.437953 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:48.462517 3219848 cri.go:89] found id: ""
	I1217 12:06:48.462539 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.462547 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:48.462557 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:48.462615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:48.486379 3219848 cri.go:89] found id: ""
	I1217 12:06:48.486402 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.486411 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:48.486418 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:48.486475 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:48.582544 3219848 cri.go:89] found id: ""
	I1217 12:06:48.582569 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.582578 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:48.582585 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:48.582691 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:48.612954 3219848 cri.go:89] found id: ""
	I1217 12:06:48.612980 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.612990 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:48.612997 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:48.613058 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:48.638059 3219848 cri.go:89] found id: ""
	I1217 12:06:48.638083 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.638091 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:48.638098 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:48.638160 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:48.663252 3219848 cri.go:89] found id: ""
	I1217 12:06:48.663278 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.663288 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:48.663298 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:48.663308 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:48.719388 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:48.719422 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:48.735198 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:48.735227 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:48.801972 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:48.793731   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.794319   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.795999   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.796684   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.798278   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:48.793731   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.794319   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.795999   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.796684   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.798278   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:48.801995 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:48.802008 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:48.827753 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:48.827787 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:51.362888 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:51.373695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:51.373779 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:51.399521 3219848 cri.go:89] found id: ""
	I1217 12:06:51.399547 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.399556 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:51.399563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:51.399620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:51.425074 3219848 cri.go:89] found id: ""
	I1217 12:06:51.425140 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.425154 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:51.425161 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:51.425219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:51.449708 3219848 cri.go:89] found id: ""
	I1217 12:06:51.449731 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.449740 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:51.449746 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:51.449818 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:51.478561 3219848 cri.go:89] found id: ""
	I1217 12:06:51.478585 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.478594 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:51.478601 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:51.478687 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:51.520104 3219848 cri.go:89] found id: ""
	I1217 12:06:51.520142 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.520152 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:51.520159 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:51.520227 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:51.589783 3219848 cri.go:89] found id: ""
	I1217 12:06:51.589826 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.589836 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:51.589843 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:51.589914 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:51.616852 3219848 cri.go:89] found id: ""
	I1217 12:06:51.616888 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.616898 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:51.616904 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:51.616967 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:51.643529 3219848 cri.go:89] found id: ""
	I1217 12:06:51.643609 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.643632 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:51.643661 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:51.643706 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:51.707671 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:51.699393   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.700178   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.701673   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.702158   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.703665   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:51.699393   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.700178   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.701673   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.702158   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.703665   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:51.707744 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:51.707772 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:51.733586 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:51.733622 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:51.763883 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:51.763912 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:51.818754 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:51.818788 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:54.336140 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:54.350294 3219848 out.go:203] 
	W1217 12:06:54.353246 3219848 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 12:06:54.353303 3219848 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 12:06:54.353317 3219848 out.go:285] * Related issues:
	W1217 12:06:54.353339 3219848 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 12:06:54.353354 3219848 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 12:06:54.356285 3219848 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.201958753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.201978051Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202016040Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202033845Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202043395Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202054242Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202063719Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202075034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202091206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202122163Z" level=info msg="Connect containerd service"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202376764Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202915340Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221759735Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221831644Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221883507Z" level=info msg="Start subscribing containerd event"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221927577Z" level=info msg="Start recovering state"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261428629Z" level=info msg="Start event monitor"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261488361Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261499979Z" level=info msg="Start streaming server"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261510449Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261519753Z" level=info msg="runtime interface starting up..."
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261526965Z" level=info msg="starting plugins..."
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261557275Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261851842Z" level=info msg="containerd successfully booted in 0.083557s"
	Dec 17 12:00:50 newest-cni-669680 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:07:03.949783   13784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:07:03.950486   13784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:07:03.952111   13784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:07:03.952680   13784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:07:03.954245   13784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 12:07:04 up 17:49,  0 user,  load average: 1.47, 0.90, 1.16
	Linux newest-cni-669680 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 12:07:00 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:07:00 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:07:00 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:01 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:01 newest-cni-669680 kubelet[13626]: E1217 12:07:01.569940   13626 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:07:01 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:07:01 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:07:02 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 17 12:07:02 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:02 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:02 newest-cni-669680 kubelet[13664]: E1217 12:07:02.318179   13664 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:07:02 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:07:02 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:07:02 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 17 12:07:02 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:02 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:03 newest-cni-669680 kubelet[13685]: E1217 12:07:03.065879   13685 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:07:03 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:07:03 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:07:03 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 17 12:07:03 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:03 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:03 newest-cni-669680 kubelet[13747]: E1217 12:07:03.815700   13747 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:07:03 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:07:03 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
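The kubelet excerpt above pinpoints the root cause: kubelet exits during configuration validation because the host still runs cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so no static pods, kube-apiserver included, are ever created, which is exactly the K8S_APISERVER_MISSING error reported earlier. A minimal sketch for confirming the host's cgroup mode, assuming shell access to the CI host (standard util-linux and Docker CLI commands, not captured in this run):

	# cgroup2fs means cgroup v2; tmpfs means legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# Docker reports the same via its info template
	docker info --format '{{.CgroupVersion}}'

On a cgroup v1 host such as this one, the first command would print tmpfs, matching the validation failure in the kubelet log.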
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (359.934627ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-669680" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-669680
helpers_test.go:244: (dbg) docker inspect newest-cni-669680:

-- stdout --
	[
	    {
	        "Id": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	        "Created": "2025-12-17T11:50:38.904543162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3219980,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T12:00:44.656180291Z",
	            "FinishedAt": "2025-12-17T12:00:43.27484179Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/hosts",
	        "LogPath": "/var/lib/docker/containers/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc/23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc-json.log",
	        "Name": "/newest-cni-669680",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-669680:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-669680",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23474ef32ddb57cce3e9ce06678ea189a36f3b5882da93beba6f8986ba4388fc",
	                "LowerDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a323466537bd2b84d1c1de1e658b41a849cb439639dbed3e328ce82ab171fd3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-669680",
	                "Source": "/var/lib/docker/volumes/newest-cni-669680/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-669680",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-669680",
	                "name.minikube.sigs.k8s.io": "newest-cni-669680",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f695758c865267c895635ea7898bf1b9d81e4dd5864219138eceead759e9a1b",
	            "SandboxKey": "/var/run/docker/netns/9f695758c865",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-669680": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:62:0f:03:13:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e84740d61c89f51b13c32d88b9c5aafc9e8e1ba5e275e3db72c9a38077e44a94",
	                    "EndpointID": "b90d44188d07afa11a62007f533d5391259eb969677e3f00be6723f39985284a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-669680",
	                        "23474ef32ddb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
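The inspect output shows the node container itself is healthy: State.Running is true, Paused is false, and 8443/tcp is published on 127.0.0.1:36056, so the connection-refused errors earlier stem from the missing apiserver process, not from a missing port mapping. As a sketch, the same binding can be pulled out with a Go template instead of scanning the full JSON (standard docker inspect formatting; the container name is the one from this run):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-669680
	# prints 36056 for the container inspected above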
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (321.044578ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
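Read together, the two status probes describe the failure precisely: the Docker host is Running while the APIServer is Stopped, i.e. a live node container whose kubelet is crash-looping. As a sketch, both fields (plus the kubelet state, which the probes above did not capture) can be read in one call using the same Go-template mechanism:

	out/minikube-linux-arm64 status -p newest-cni-669680 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
	# for this run the host would report Running and the apiserver Stopped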
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-669680 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-669680 logs -n 25: (1.590255211s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p embed-certs-628462                                                                                                                                                                                                                                    │ embed-certs-628462           │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ delete  │ -p disable-driver-mounts-003095                                                                                                                                                                                                                          │ disable-driver-mounts-003095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:48 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:48 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ stop    │ -p default-k8s-diff-port-224095 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:49 UTC │ 17 Dec 25 11:50 UTC │
	│ image   │ default-k8s-diff-port-224095 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ pause   │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ unpause │ -p default-k8s-diff-port-224095 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ delete  │ -p default-k8s-diff-port-224095                                                                                                                                                                                                                          │ default-k8s-diff-port-224095 │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │ 17 Dec 25 11:50 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:50 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-118262 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:53 UTC │                     │
	│ stop    │ -p no-preload-118262 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ addons  │ enable dashboard -p no-preload-118262 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │ 17 Dec 25 11:55 UTC │
	│ start   │ -p no-preload-118262 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-118262            │ jenkins │ v1.37.0 │ 17 Dec 25 11:55 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-669680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 11:58 UTC │                     │
	│ stop    │ -p newest-cni-669680 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │ 17 Dec 25 12:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-669680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │ 17 Dec 25 12:00 UTC │
	│ start   │ -p newest-cni-669680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │                     │
	│ image   │ newest-cni-669680 image list --format=json                                                                                                                                                                                                               │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:06 UTC │ 17 Dec 25 12:06 UTC │
	│ pause   │ -p newest-cni-669680 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:06 UTC │ 17 Dec 25 12:07 UTC │
	│ unpause │ -p newest-cni-669680 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-669680            │ jenkins │ v1.37.0 │ 17 Dec 25 12:07 UTC │ 17 Dec 25 12:07 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 12:00:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
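	
	The header layout noted above is the standard klog/glog one used throughout this log. As a reading aid, a minimal Go sketch that splits such a line into its fields (the regular expression and field names are this sketch's own, not minikube code):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// Matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6}) +(\d+) ([^ \]]+:\d+)\] (.*)$`)
	
	func main() {
		line := "I1217 12:00:44.347526 3219848 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s mmdd=%s time=%s pid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
	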
	I1217 12:00:44.347526 3219848 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:00:44.347663 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347673 3219848 out.go:374] Setting ErrFile to fd 2...
	I1217 12:00:44.347678 3219848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:00:44.347938 3219848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 12:00:44.348321 3219848 out.go:368] Setting JSON to false
	I1217 12:00:44.349222 3219848 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":63795,"bootTime":1765909050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 12:00:44.349300 3219848 start.go:143] virtualization:  
	I1217 12:00:44.352466 3219848 out.go:179] * [newest-cni-669680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 12:00:44.356190 3219848 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 12:00:44.356282 3219848 notify.go:221] Checking for updates...
	I1217 12:00:44.362135 3219848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:00:44.365177 3219848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:44.368881 3219848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 12:00:44.372015 3219848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 12:00:44.375014 3219848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:00:44.378336 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:44.378951 3219848 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:00:44.413369 3219848 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 12:00:44.413513 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.473970 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.464532408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.474081 3219848 docker.go:319] overlay module found
	I1217 12:00:44.477205 3219848 out.go:179] * Using the docker driver based on existing profile
	I1217 12:00:44.480155 3219848 start.go:309] selected driver: docker
	I1217 12:00:44.480182 3219848 start.go:927] validating driver "docker" against &{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.480300 3219848 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:00:44.481122 3219848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:00:44.568687 3219848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:00:44.559079636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:00:44.569054 3219848 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 12:00:44.569088 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:44.569145 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:44.569196 3219848 start.go:353] cluster config:
	{Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:44.574245 3219848 out.go:179] * Starting "newest-cni-669680" primary control-plane node in "newest-cni-669680" cluster
	I1217 12:00:44.576964 3219848 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 12:00:44.579814 3219848 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 12:00:44.582545 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:44.582593 3219848 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1217 12:00:44.582604 3219848 cache.go:65] Caching tarball of preloaded images
	I1217 12:00:44.582624 3219848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 12:00:44.582700 3219848 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 12:00:44.582711 3219848 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1217 12:00:44.582826 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.602190 3219848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 12:00:44.602216 3219848 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 12:00:44.602262 3219848 cache.go:243] Successfully downloaded all kic artifacts
	I1217 12:00:44.602326 3219848 start.go:360] acquireMachinesLock for newest-cni-669680: {Name:mk48c8383b245a4b70f2208fe2e76b80693bbb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:00:44.602428 3219848 start.go:364] duration metric: took 68.29µs to acquireMachinesLock for "newest-cni-669680"
	I1217 12:00:44.602457 3219848 start.go:96] Skipping create...Using existing machine configuration
	I1217 12:00:44.602505 3219848 fix.go:54] fixHost starting: 
	I1217 12:00:44.602917 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.620734 3219848 fix.go:112] recreateIfNeeded on newest-cni-669680: state=Stopped err=<nil>
	W1217 12:00:44.620765 3219848 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 12:00:44.760258 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:46.760539 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:44.623987 3219848 out.go:252] * Restarting existing docker container for "newest-cni-669680" ...
	I1217 12:00:44.624072 3219848 cli_runner.go:164] Run: docker start newest-cni-669680
	I1217 12:00:44.870900 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:44.893559 3219848 kic.go:432] container "newest-cni-669680" state is running.
	I1217 12:00:44.894282 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:44.917205 3219848 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/config.json ...
	I1217 12:00:44.917570 3219848 machine.go:94] provisionDockerMachine start ...
	I1217 12:00:44.917645 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:44.945980 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:44.946096 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:44.946104 3219848 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:00:44.946864 3219848 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 12:00:48.084367 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
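	The single "Error dialing TCP: ssh: handshake failed: EOF" above is expected: sshd inside the freshly restarted container is not up yet, and the dial is simply retried until it succeeds a few seconds later. A minimal stdlib sketch of that wait-for-port pattern (the address and timings here are illustrative, not minikube's actual values):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// waitTCP retries a TCP dial until it succeeds or the deadline passes,
	// mirroring the retry-on-EOF behaviour visible in the log above.
	func waitTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("port %s never became ready: %w", addr, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		fmt.Println(waitTCP("127.0.0.1:36053", 30*time.Second))
	}
	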
	I1217 12:00:48.084399 3219848 ubuntu.go:182] provisioning hostname "newest-cni-669680"
	I1217 12:00:48.084507 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.104367 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.104656 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.104680 3219848 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-669680 && echo "newest-cni-669680" | sudo tee /etc/hostname
	I1217 12:00:48.247265 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-669680
	
	I1217 12:00:48.247353 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.270652 3219848 main.go:143] libmachine: Using SSH client type: native
	I1217 12:00:48.270788 3219848 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36053 <nil> <nil>}
	I1217 12:00:48.270817 3219848 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-669680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-669680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-669680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:00:48.417473 3219848 main.go:143] libmachine: SSH cmd err, output: <nil>: 
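	
	The shell run just above makes the /etc/hosts edit idempotent: do nothing if a line already ends with the hostname, rewrite the 127.0.1.1 line if one exists, otherwise append a new entry. The same three-way decision as a pure-string Go sketch (the function name and demo input are invented for illustration):
	
	package main
	
	import (
		"fmt"
		"regexp"
		"strings"
	)
	
	// ensureHostsEntry applies the same logic as the shell snippet above:
	// already present -> unchanged; existing 127.0.1.1 line -> rewritten;
	// otherwise -> entry appended.
	func ensureHostsEntry(hosts, name string) string {
		present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
		if present.MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}
	
	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "newest-cni-669680"))
	}
	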
	I1217 12:00:48.417557 3219848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 12:00:48.417596 3219848 ubuntu.go:190] setting up certificates
	I1217 12:00:48.417639 3219848 provision.go:84] configureAuth start
	I1217 12:00:48.417749 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:48.437471 3219848 provision.go:143] copyHostCerts
	I1217 12:00:48.437568 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 12:00:48.437587 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 12:00:48.437717 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 12:00:48.437858 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 12:00:48.437877 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 12:00:48.437916 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 12:00:48.438005 3219848 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 12:00:48.438028 3219848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 12:00:48.438055 3219848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 12:00:48.438157 3219848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.newest-cni-669680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-669680]
	I1217 12:00:48.577436 3219848 provision.go:177] copyRemoteCerts
	I1217 12:00:48.577506 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:00:48.577546 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.595338 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.692538 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:00:48.711734 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 12:00:48.729881 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 12:00:48.748237 3219848 provision.go:87] duration metric: took 330.555362ms to configureAuth
	I1217 12:00:48.748262 3219848 ubuntu.go:206] setting minikube options for container-runtime
	I1217 12:00:48.748550 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:48.748561 3219848 machine.go:97] duration metric: took 3.830976751s to provisionDockerMachine
	I1217 12:00:48.748569 3219848 start.go:293] postStartSetup for "newest-cni-669680" (driver="docker")
	I1217 12:00:48.748581 3219848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:00:48.748643 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:00:48.748683 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.766578 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:48.864654 3219848 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:00:48.868220 3219848 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 12:00:48.868249 3219848 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 12:00:48.868261 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 12:00:48.868318 3219848 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 12:00:48.868401 3219848 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 12:00:48.868523 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:00:48.876210 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:48.894408 3219848 start.go:296] duration metric: took 145.823675ms for postStartSetup
	I1217 12:00:48.894507 3219848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 12:00:48.894563 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:48.913872 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.010734 3219848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 12:00:49.017136 3219848 fix.go:56] duration metric: took 4.414624566s for fixHost
	I1217 12:00:49.017182 3219848 start.go:83] releasing machines lock for "newest-cni-669680", held for 4.414721098s
	I1217 12:00:49.017319 3219848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-669680
	I1217 12:00:49.041576 3219848 ssh_runner.go:195] Run: cat /version.json
	I1217 12:00:49.041642 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.041898 3219848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:00:49.041972 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:49.071567 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.072178 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:49.261249 3219848 ssh_runner.go:195] Run: systemctl --version
	I1217 12:00:49.267897 3219848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:00:49.272503 3219848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:00:49.272574 3219848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:00:49.280715 3219848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 12:00:49.280743 3219848 start.go:496] detecting cgroup driver to use...
	I1217 12:00:49.280787 3219848 detect.go:187] detected "cgroupfs" cgroup driver on host os
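	
	The detected "cgroupfs" driver decides the SystemdCgroup edits to containerd's config further down. One common way to make that call is to check whether /sys/fs/cgroup is a cgroup v2 unified hierarchy; a sketch with golang.org/x/sys/unix follows (this illustrates the check in general, not minikube's detect.go specifically):
	
	package main
	
	import (
		"fmt"
	
		"golang.org/x/sys/unix"
	)
	
	// On cgroup v2 hosts /sys/fs/cgroup is a cgroup2 filesystem; on v1 it is
	// tmpfs with per-controller mounts. Hosts not driven by systemd typically
	// default to the cgroupfs driver, as seen in the log above.
	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			panic(err)
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 -> cgroupfs driver is the usual choice")
		}
	}
	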
	I1217 12:00:49.280844 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 12:00:49.298858 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 12:00:49.313120 3219848 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:00:49.313230 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:00:49.329245 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:00:49.342531 3219848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:00:49.461223 3219848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:00:49.579409 3219848 docker.go:234] disabling docker service ...
	I1217 12:00:49.579510 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:00:49.594800 3219848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:00:49.608313 3219848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:00:49.737460 3219848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:00:49.883222 3219848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:00:49.897339 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:00:49.911914 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 12:00:49.921268 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 12:00:49.930257 3219848 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 12:00:49.930398 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 12:00:49.939639 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.948689 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 12:00:49.958342 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:00:49.967395 3219848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:00:49.975730 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 12:00:49.984582 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 12:00:49.993553 3219848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 12:00:50.009983 3219848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:00:50.019753 3219848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:00:50.028837 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.142686 3219848 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 12:00:50.264183 3219848 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 12:00:50.264308 3219848 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 12:00:50.268160 3219848 start.go:564] Will wait 60s for crictl version
	I1217 12:00:50.268261 3219848 ssh_runner.go:195] Run: which crictl
	I1217 12:00:50.271790 3219848 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 12:00:50.298148 3219848 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1217 12:00:50.298258 3219848 ssh_runner.go:195] Run: containerd --version
	I1217 12:00:50.318643 3219848 ssh_runner.go:195] Run: containerd --version
	I1217 12:00:50.346609 3219848 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1217 12:00:50.349545 3219848 cli_runner.go:164] Run: docker network inspect newest-cni-669680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 12:00:50.366603 3219848 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 12:00:50.370482 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:00:50.383622 3219848 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 12:00:50.386526 3219848 kubeadm.go:884] updating cluster {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:00:50.386672 3219848 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1217 12:00:50.386774 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.415106 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.415132 3219848 containerd.go:534] Images already preloaded, skipping extraction
	I1217 12:00:50.415224 3219848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:00:50.444492 3219848 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:00:50.444517 3219848 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:00:50.444526 3219848 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1217 12:00:50.444639 3219848 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-669680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
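	
	The kubelet drop-in above is generated from the cluster config: the Kubernetes version selects the binary path, and the node name and IP become --hostname-override and --node-ip. A text/template sketch of that substitution (the template text and struct are assumptions for illustration; per the scp lines below, the real file lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf):
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Illustrative template for the ExecStart drop-in shown above.
	var unit = template.Must(template.New("kubelet").Parse(`[Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
	
	[Install]
	`))
	
	func main() {
		unit.Execute(os.Stdout, struct{ Version, Node, IP string }{
			"v1.35.0-rc.1", "newest-cni-669680", "192.168.76.2",
		})
	}
	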
	I1217 12:00:50.444718 3219848 ssh_runner.go:195] Run: sudo crictl info
	I1217 12:00:50.471453 3219848 cni.go:84] Creating CNI manager for ""
	I1217 12:00:50.471478 3219848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 12:00:50.471497 3219848 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 12:00:50.471553 3219848 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-669680 NodeName:newest-cni-669680 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:00:50.471711 3219848 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-669680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
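	The generated manifest above is four YAML documents separated by "---" markers. A sketch of reading one typed value back out of the KubeProxyConfiguration document with gopkg.in/yaml.v3 (an external Go module; the struct here maps only the fields we look at and is not a kubeadm API type):
	
	package main
	
	import (
		"fmt"
	
		"gopkg.in/yaml.v3"
	)
	
	var doc = []byte(`apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	`)
	
	func main() {
		var cfg struct {
			Kind        string `yaml:"kind"`
			ClusterCIDR string `yaml:"clusterCIDR"`
		}
		if err := yaml.Unmarshal(doc, &cfg); err != nil {
			panic(err)
		}
		// Matches the --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 flag above.
		fmt.Println(cfg.Kind, "routes pods in", cfg.ClusterCIDR)
	}
	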
	I1217 12:00:50.471828 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 12:00:50.480867 3219848 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:00:50.480998 3219848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:00:50.488686 3219848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1217 12:00:50.504356 3219848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 12:00:50.520176 3219848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1217 12:00:50.535930 3219848 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 12:00:50.540134 3219848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:00:50.550629 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:50.669384 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:50.685420 3219848 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680 for IP: 192.168.76.2
	I1217 12:00:50.685479 3219848 certs.go:195] generating shared ca certs ...
	I1217 12:00:50.685497 3219848 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:50.685634 3219848 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 12:00:50.685683 3219848 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 12:00:50.685690 3219848 certs.go:257] generating profile certs ...
	I1217 12:00:50.685787 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/client.key
	I1217 12:00:50.685851 3219848 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key.e7646161
	I1217 12:00:50.685893 3219848 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key
	I1217 12:00:50.686084 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 12:00:50.686149 3219848 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 12:00:50.686177 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:00:50.686225 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:00:50.686286 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:00:50.686340 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 12:00:50.686422 3219848 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:00:50.687047 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:00:50.710384 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:00:50.730920 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:00:50.751265 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:00:50.772018 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 12:00:50.790833 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 12:00:50.810114 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:00:50.828402 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/newest-cni-669680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:00:50.846753 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:00:50.865705 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 12:00:50.886567 3219848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 12:00:50.904533 3219848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:00:50.917457 3219848 ssh_runner.go:195] Run: openssl version
	I1217 12:00:50.923993 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.931839 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:00:50.939507 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943237 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.943304 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:00:50.984637 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:00:50.992168 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 12:00:50.999795 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 12:00:51.020372 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024379 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.024566 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 12:00:51.066006 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:00:51.074211 3219848 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.082049 3219848 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 12:00:51.090651 3219848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.094888 3219848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.095004 3219848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 12:00:51.137313 3219848 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:00:51.145186 3219848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:00:51.149385 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 12:00:51.191456 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 12:00:51.232840 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 12:00:51.275219 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 12:00:51.317313 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 12:00:51.358746 3219848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
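	
	Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks one question: does this certificate stay valid for at least another 24 hours? The same check written against Go's standard library (the cert path is one of the files checked above):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// Equivalent of `openssl x509 -noout -in <path> -checkend 86400`:
	// exit non-zero if the certificate expires within the next 24 hours.
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h more")
	}
	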
	I1217 12:00:51.399851 3219848 kubeadm.go:401] StartCluster: {Name:newest-cni-669680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-669680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:00:51.399946 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 12:00:51.400058 3219848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:00:51.427405 3219848 cri.go:89] found id: ""
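`crictl ps -a --quiet` with a namespace label prints one container ID per line; the empty `found id: ""` result above means containerd currently knows of no kube-system containers. A hypothetical helper mirroring that probe locally (minikube issues it over SSH):

```go
// List kube-system container IDs the way the logged crictl command does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// --quiet emits bare IDs, one per line; Fields also tolerates blanks.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	fmt.Println(ids, err)
}
```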
	I1217 12:00:51.427480 3219848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:00:51.435564 3219848 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 12:00:51.435593 3219848 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 12:00:51.435648 3219848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 12:00:51.443379 3219848 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 12:00:51.443986 3219848 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-669680" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.444236 3219848 kubeconfig.go:62] /home/jenkins/minikube-integration/22182-2922712/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-669680" cluster setting kubeconfig missing "newest-cni-669680" context setting]
	I1217 12:00:51.444696 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
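The repair noted above boils down to writing the missing cluster and context stanzas for newest-cni-669680 back into the kubeconfig while holding a file lock. A minimal sketch using client-go's clientcmd package; the profile name and server URL are taken from the log, and auth/CA wiring is omitted for brevity:

```go
// Add a missing cluster + context to an existing kubeconfig and rewrite it.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cluster := clientcmdapi.NewCluster()
	cluster.Server = server // e.g. https://192.168.76.2:8443
	cfg.Clusters[name] = cluster

	ctx := clientcmdapi.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name
	cfg.Contexts[name] = ctx

	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := repairKubeconfig(
		"/home/jenkins/minikube-integration/22182-2922712/kubeconfig",
		"newest-cni-669680",
		"https://192.168.76.2:8443",
	); err != nil {
		panic(err)
	}
}
```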
	I1217 12:00:51.446096 3219848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 12:00:51.454141 3219848 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 12:00:51.454214 3219848 kubeadm.go:602] duration metric: took 18.613293ms to restartPrimaryControlPlane
	I1217 12:00:51.454230 3219848 kubeadm.go:403] duration metric: took 54.392206ms to StartCluster
	I1217 12:00:51.454245 3219848 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.454304 3219848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:00:51.455245 3219848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:00:51.455481 3219848 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 12:00:51.455797 3219848 config.go:182] Loaded profile config "newest-cni-669680": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:00:51.455846 3219848 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:00:51.455911 3219848 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-669680"
	I1217 12:00:51.455924 3219848 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-669680"
	I1217 12:00:51.455953 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.456410 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456591 3219848 addons.go:70] Setting dashboard=true in profile "newest-cni-669680"
	I1217 12:00:51.457002 3219848 addons.go:239] Setting addon dashboard=true in "newest-cni-669680"
	W1217 12:00:51.457012 3219848 addons.go:248] addon dashboard should already be in state true
	I1217 12:00:51.457034 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.457458 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.456605 3219848 addons.go:70] Setting default-storageclass=true in profile "newest-cni-669680"
	I1217 12:00:51.458033 3219848 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-669680"
	I1217 12:00:51.458306 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.460659 3219848 out.go:179] * Verifying Kubernetes components...
	I1217 12:00:51.463611 3219848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:00:51.495379 3219848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:00:51.502753 3219848 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.502777 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
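`scp memory --> ...` means the manifest bytes never touch the local disk: they are streamed from the test binary's memory straight to a file on the node over SSH. A rough equivalent of that transfer, assuming an already-established *ssh.Client from golang.org/x/crypto/ssh (minikube's sshutil/ssh_runner wrap the same idea):

```go
// Stream in-memory bytes to a root-owned file on a remote node.
package nodecopy

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

func copyToNode(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee under sudo so the file can land in /etc/kubernetes/addons.
	return sess.Run("sudo tee " + dest + " >/dev/null")
}
```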
	I1217 12:00:51.502845 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.511997 3219848 addons.go:239] Setting addon default-storageclass=true in "newest-cni-669680"
	I1217 12:00:51.512038 3219848 host.go:66] Checking if "newest-cni-669680" exists ...
	I1217 12:00:51.512543 3219848 cli_runner.go:164] Run: docker container inspect newest-cni-669680 --format={{.State.Status}}
	I1217 12:00:51.527586 3219848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 12:00:51.536600 3219848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1217 12:00:49.260592 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:51.760613 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:51.539513 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 12:00:51.539539 3219848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 12:00:51.539612 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.555471 3219848 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.555502 3219848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:00:51.555570 3219848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-669680
	I1217 12:00:51.569622 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.592016 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.601832 3219848 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36053 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/newest-cni-669680/id_ed25519 Username:docker}
	I1217 12:00:51.689678 3219848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:00:51.731294 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:51.749491 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:51.814469 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 12:00:51.814496 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 12:00:51.839602 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 12:00:51.839672 3219848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 12:00:51.852764 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 12:00:51.852827 3219848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 12:00:51.865089 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 12:00:51.865152 3219848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 12:00:51.878190 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 12:00:51.878259 3219848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 12:00:51.890831 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 12:00:51.890854 3219848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 12:00:51.903270 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 12:00:51.903294 3219848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 12:00:51.916127 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 12:00:51.916153 3219848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 12:00:51.929059 3219848 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 12:00:51.929123 3219848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 12:00:51.942273 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.502896 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.502968 3219848 retry.go:31] will retry after 269.884821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.503026 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503067 3219848 retry.go:31] will retry after 319.702383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503040 3219848 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:00:52.503258 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:52.503300 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.503321 3219848 retry.go:31] will retry after 196.810414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.700893 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:52.770562 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.770599 3219848 retry.go:31] will retry after 481.518663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.773838 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:52.823221 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:52.855276 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.855328 3219848 retry.go:31] will retry after 391.667259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:52.894877 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:52.894917 3219848 retry.go:31] will retry after 200.928151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.004579 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:53.096394 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:53.155868 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.155897 3219848 retry.go:31] will retry after 564.238822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.248228 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:53.253066 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.368787 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.368822 3219848 retry.go:31] will retry after 377.070742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.369052 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.369071 3219848 retry.go:31] will retry after 485.691157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.504052 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:53.720468 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:53.746162 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:53.794993 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.795027 3219848 retry.go:31] will retry after 872.052872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:53.811480 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.811533 3219848 retry.go:31] will retry after 558.92589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.855758 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:53.922708 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:53.922745 3219848 retry.go:31] will retry after 803.451465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.003704 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
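The pgrep probes recurring through this window are a liveness wait: poll for a kube-apiserver process until it shows up or a deadline passes. A minimal sketch of that loop (illustrative; minikube drives pgrep over SSH and the pattern string is copied from the log):

```go
// Poll for the apiserver process; pgrep exits 0 only when a match exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(30 * time.Second))
}
```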
	W1217 12:00:54.260476 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:00:56.760549 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:54.370776 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:54.437621 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.437652 3219848 retry.go:31] will retry after 1.190014231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.503835 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:54.667963 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:00:54.726498 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:54.728210 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.728287 3219848 retry.go:31] will retry after 1.413986656s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:00:54.813279 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:54.813372 3219848 retry.go:31] will retry after 1.840693776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.005986 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:55.504112 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:55.628242 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:00:55.689054 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:55.689136 3219848 retry.go:31] will retry after 1.799425819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.003624 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:56.142943 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:56.205592 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.205625 3219848 retry.go:31] will retry after 2.655712888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.503981 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
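	
	Note: interleaved with the apply retries, the test polls roughly every 500ms for a running kube-apiserver process via `pgrep -xnf`. A minimal sketch of that readiness loop, assuming a hypothetical `waitForAPIServer` helper (not minikube source):
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )
	
	    // waitForAPIServer keeps running pgrep until a kube-apiserver process
	    // shows up or the timeout elapses, mirroring the ~500ms poll in the log.
	    func waitForAPIServer(timeout time.Duration) bool {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            // pgrep exits 0 when at least one process matches the full command line.
	            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	                return true
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return false
	    }
	
	    func main() {
	        fmt.Println("apiserver up:", waitForAPIServer(30 * time.Second))
	    }
	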
	I1217 12:00:56.654730 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:56.717604 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:56.717641 3219848 retry.go:31] will retry after 1.909418395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.004223 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:57.489437 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:00:57.503984 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:57.562808 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:57.562840 3219848 retry.go:31] will retry after 3.72719526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.014740 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.503409 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:00:58.627253 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:00:58.690443 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.690481 3219848 retry.go:31] will retry after 3.549926007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.861704 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:00:58.923654 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:58.923683 3219848 retry.go:31] will retry after 2.058003245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:00:59.003967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:00:59.260028 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:01.761273 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:00:59.504167 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.018808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.504031 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:00.982724 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:01.004335 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:01.111365 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.111399 3219848 retry.go:31] will retry after 3.900095446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.291002 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:01.368946 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.368996 3219848 retry.go:31] will retry after 3.675584678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:01.503381 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.004403 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:02.241403 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:02.307939 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.307978 3219848 retry.go:31] will retry after 5.738469139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:02.504084 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.003562 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:03.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:04.005140 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:04.259626 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:06.260640 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:08.759809 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
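	
	Note: the `node_ready.go:55` warnings come from a second, concurrent test process (pid 3212985, the no-preload profile) polling its node's Ready condition at 192.168.85.2:8443 and likewise getting connection refused; the two processes share the log, which explains the out-of-order timestamps. A sketch of the condition being checked, using the standard client-go API (the `nodeReady` helper is hypothetical):
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    // nodeReady fetches the node object and reports its Ready condition.
	    // While the apiserver is down, the Get fails with "connection refused",
	    // which is what each warning above records before the next retry.
	    func nodeReady(kubeconfig, name string) (bool, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return false, err
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            return false, err
	        }
	        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, c := range node.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }
	
	    func main() {
	        ok, err := nodeReady("/var/lib/minikube/kubeconfig", "no-preload-118262")
	        fmt.Println(ok, err)
	    }
	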
	I1217 12:01:04.503830 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.003702 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:05.012660 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:01:05.045335 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:05.083423 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.083461 3219848 retry.go:31] will retry after 9.235586003s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:01:05.118369 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.118401 3219848 retry.go:31] will retry after 3.828272571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:05.503857 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.003637 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:06.504078 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.003401 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:07.503344 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.004170 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.047658 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:08.113675 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.113710 3219848 retry.go:31] will retry after 7.390134832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:08.504355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:08.946950 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:01:09.003509 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:09.011595 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:09.011629 3219848 retry.go:31] will retry after 14.170665244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
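
Every kubectl apply in this stretch fails the same way — the client cannot download the OpenAPI schema because nothing is listening on localhost:8443 — and minikube's retry helper (retry.go) re-runs the apply after a growing, jittered delay ("will retry after 14.170665244s"). Below is a minimal sketch of that retry-with-backoff pattern; the attempt count and base delay are illustrative assumptions, not minikube's actual tuning.

    // Sketch only: re-run `kubectl apply` with jittered exponential backoff,
    // in the spirit of the retry.go lines above. The manifest path is taken
    // from the log; attempt count and base delay are assumed.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    func main() {
    	manifest := "/etc/kubernetes/addons/storageclass.yaml" // path from the log
    	backoff := 5 * time.Second
    	for attempt := 1; attempt <= 6; attempt++ {
    		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			fmt.Printf("applied %s on attempt %d\n", manifest, attempt)
    			return
    		}
    		// kubectl exits 1 while the apiserver is unreachable; log and retry.
    		fmt.Printf("apply failed (attempt %d): %v\n%s", attempt, err, out)
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
    		time.Sleep(sleep)
    		backoff *= 2
    	}
    	fmt.Println("giving up")
    }
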
	W1217 12:01:11.259781 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:13.760361 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:09.503956 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.018957 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:10.503456 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.004169 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:11.503808 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.003522 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:12.503603 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.003862 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:13.503472 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:14.004363 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:14.319308 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:16.260406 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:18.759622 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:14.385208 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:14.385243 3219848 retry.go:31] will retry after 5.459360953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:14.503378 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.006355 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504086 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:15.504108 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:15.572879 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:15.572915 3219848 retry.go:31] will retry after 11.777794795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:16.005530 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:16.503503 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.003649 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:17.503430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.005004 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:18.504088 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.003423 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:20.760668 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:23.259693 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
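
The interleaved node_ready.go warnings come from a second process (pid 3212985) waiting for node "no-preload-118262" to report the Ready condition against 192.168.85.2:8443, retrying about every 2.5 s. A rough stand-in for that wait loop, using kubectl with a jsonpath query rather than minikube's actual client-go code (node name from the log; everything else assumed):

    // Sketch only: poll a node's Ready condition until it reports True.
    // Assumes kubectl on PATH and a kubeconfig pointing at the cluster.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func nodeReady(node string) (bool, error) {
    	out, err := exec.Command("kubectl", "get", "node", node,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	for {
    		ready, err := nodeReady("no-preload-118262")
    		if err != nil {
    			fmt.Println("error getting node status (will retry):", err)
    		} else if ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2500 * time.Millisecond)
    	}
    }
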
	I1217 12:01:19.503667 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:19.845708 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:19.909350 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:19.909381 3219848 retry.go:31] will retry after 9.722081791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:20.003736 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:20.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.004457 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:21.504148 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.003426 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:22.504235 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.004166 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:23.183313 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:23.244255 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.244289 3219848 retry.go:31] will retry after 19.619062537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:23.503427 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:24.006966 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:25.259753 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:27.759647 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:24.503758 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.004125 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:25.503463 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.004155 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:26.504576 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.003556 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:27.351598 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:27.419162 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.419195 3219848 retry.go:31] will retry after 15.164194741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:27.503619 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.003385 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:28.503474 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.004314 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:29.760524 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:32.259673 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:29.503968 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:29.632290 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:29.699987 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:29.700018 3219848 retry.go:31] will retry after 12.658501476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:30.003430 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:30.503407 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.003818 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:31.504094 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.003845 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:32.503410 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.005413 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:33.503962 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:34.003405 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:34.259722 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:36.759694 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:34.503770 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.004969 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:35.504211 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.003492 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:36.503881 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.008063 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:37.504267 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.004154 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:38.504195 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:39.005022 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:39.260642 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:41.759666 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:39.504074 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.009459 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:40.504054 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.004134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:41.504134 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.003867 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.359033 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:01:42.424319 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.424350 3219848 retry.go:31] will retry after 39.499798177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.503565 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:42.584549 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:01:42.654579 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.654612 3219848 retry.go:31] will retry after 22.182784721s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.864124 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:01:42.925874 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:42.925916 3219848 retry.go:31] will retry after 18.241160237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:01:43.004102 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:43.504356 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:44.004028 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:44.259623 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:46.260805 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:48.760674 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:44.503929 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.003640 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:45.503747 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.003443 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:46.503967 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.003372 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:47.503601 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.003536 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:48.503987 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:49.003434 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 12:01:51.260164 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:53.759783 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:49.504162 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.003493 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:50.503875 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.004324 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:51.503888 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:51.503983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:51.536666 3219848 cri.go:89] found id: ""
	I1217 12:01:51.536689 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.536698 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:51.536704 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:51.536768 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:51.562047 3219848 cri.go:89] found id: ""
	I1217 12:01:51.562070 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.562078 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:51.562084 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:51.562149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:51.586286 3219848 cri.go:89] found id: ""
	I1217 12:01:51.586309 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.586317 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:51.586323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:51.586381 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:51.611834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.611858 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.611867 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:51.611873 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:51.611942 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:51.637620 3219848 cri.go:89] found id: ""
	I1217 12:01:51.637643 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.637651 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:51.637658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:51.637715 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:51.663176 3219848 cri.go:89] found id: ""
	I1217 12:01:51.663198 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.663206 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:51.663212 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:51.663273 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:51.688038 3219848 cri.go:89] found id: ""
	I1217 12:01:51.688064 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.688083 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:51.688090 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:51.688159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:51.715834 3219848 cri.go:89] found id: ""
	I1217 12:01:51.715860 3219848 logs.go:282] 0 containers: []
	W1217 12:01:51.715870 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
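
At 12:01:51 the diagnostic sweep (cri.go / logs.go) runs `sudo crictl ps -a --quiet --name=<component>` for each control-plane component and finds zero containers for every one of them — the control plane never came up under containerd. The same sweep, sketched in Go (assumes crictl on PATH and the default containerd endpoint; component list taken from the log):

    // Sketch only: check each expected component for any container, running
    // or exited, the way the crictl queries above do.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out)) // one container ID per line
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s)\n", name, len(ids))
    	}
    }
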
	I1217 12:01:51.715879 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:51.715890 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:51.772533 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:51.772567 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:51.788370 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:51.788400 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:51.855552 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:51.847275    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.848081    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849574    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849998    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.851493    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:51.847275    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.848081    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849574    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.849998    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:51.851493    1839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
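
Every failure above bottoms out in the same symptom: `dial tcp [::1]:8443: connect: connection refused`. A quick TCP probe of the endpoint — purely illustrative, not something minikube runs — separates "apiserver down" from other kubectl errors before wading through the stderr dumps:

    // Sketch only: probe the apiserver port that all the kubectl failures
    // above trace back to.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }
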
	I1217 12:01:51.855615 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:51.855635 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:51.880660 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:51.880693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
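
The container-status step shells out with a fallback — `sudo crictl ps -a || sudo docker ps -a` — so log gathering still produces output whichever runtime CLI the node has. The same prefer-then-fall-back idea in Go (commands from the log; error handling assumed):

    // Sketch only: try crictl first, fall back to docker if it is missing
    // or fails, mirroring the shell one-liner above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() (string, error) {
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    		return string(out), nil
    	}
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("no container runtime CLI available:", err)
    		return
    	}
    	fmt.Print(out)
    }
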
	W1217 12:01:56.259727 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	W1217 12:01:58.760523 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:01:54.414807 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:54.425488 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:54.425558 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:54.453841 3219848 cri.go:89] found id: ""
	I1217 12:01:54.453870 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.453880 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:54.453887 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:54.453946 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:54.478957 3219848 cri.go:89] found id: ""
	I1217 12:01:54.478982 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.478991 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:54.478998 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:54.479060 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:54.504488 3219848 cri.go:89] found id: ""
	I1217 12:01:54.504516 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.504535 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:54.504543 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:54.504606 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:54.529418 3219848 cri.go:89] found id: ""
	I1217 12:01:54.529445 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.529454 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:54.529460 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:54.529519 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:54.557757 3219848 cri.go:89] found id: ""
	I1217 12:01:54.557781 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.557790 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:54.557797 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:54.557854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:54.586961 3219848 cri.go:89] found id: ""
	I1217 12:01:54.586996 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.587004 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:54.587011 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:54.587077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:54.612590 3219848 cri.go:89] found id: ""
	I1217 12:01:54.612617 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.612626 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:54.612633 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:54.612694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:54.638207 3219848 cri.go:89] found id: ""
	I1217 12:01:54.638234 3219848 logs.go:282] 0 containers: []
	W1217 12:01:54.638243 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:54.638253 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:54.638264 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:54.695917 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:54.695955 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:54.712729 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:54.712759 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:54.782298 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:54.774102    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.774684    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776463    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776850    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.778510    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:54.774102    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.774684    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776463    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.776850    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:54.778510    1954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
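Every describe-nodes attempt in this window fails the same way: kubectl's discovery client (memcache.go) cannot fetch the API group list from https://localhost:8443, and "connection refused" on [::1]:8443 means nothing is listening on the apiserver port at all, which is consistent with the empty crictl listings above. A hedged way to confirm this by hand from inside the node (assuming the node image ships curl; if it does not, the crictl check alone tells the same story):

	# No kube-apiserver container running => the port probe should fail fast.
	sudo crictl ps -a --name=kube-apiserver
	curl -ksS --max-time 5 https://localhost:8443/healthz || echo "apiserver not listening"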
	I1217 12:01:54.782321 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:54.782333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:54.807165 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:54.807196 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:01:57.336099 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:01:57.346978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:01:57.347048 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:01:57.371132 3219848 cri.go:89] found id: ""
	I1217 12:01:57.371155 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.371163 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:01:57.371169 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:01:57.371232 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:01:57.396905 3219848 cri.go:89] found id: ""
	I1217 12:01:57.396933 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.396942 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:01:57.396948 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:01:57.397011 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:01:57.425337 3219848 cri.go:89] found id: ""
	I1217 12:01:57.425366 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.425374 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:01:57.425381 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:01:57.425440 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:01:57.449681 3219848 cri.go:89] found id: ""
	I1217 12:01:57.449709 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.449718 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:01:57.449725 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:01:57.449784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:01:57.475302 3219848 cri.go:89] found id: ""
	I1217 12:01:57.475328 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.475337 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:01:57.475343 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:01:57.475412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:01:57.500270 3219848 cri.go:89] found id: ""
	I1217 12:01:57.500344 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.500369 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:01:57.500389 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:01:57.500509 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:01:57.527492 3219848 cri.go:89] found id: ""
	I1217 12:01:57.527519 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.527532 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:01:57.527538 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:01:57.527650 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:01:57.553482 3219848 cri.go:89] found id: ""
	I1217 12:01:57.553549 3219848 logs.go:282] 0 containers: []
	W1217 12:01:57.553576 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:01:57.553602 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:01:57.553627 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:01:57.609257 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:01:57.609292 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:01:57.625325 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:01:57.625352 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:01:57.691022 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:01:57.682604    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.683106    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.684793    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.685506    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.687043    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:01:57.682604    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.683106    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.684793    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.685506    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:01:57.687043    2067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:01:57.691048 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:01:57.691061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:01:57.716301 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:01:57.716333 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 12:02:01.260216 3212985 node_ready.go:55] error getting node "no-preload-118262" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-118262": dial tcp 192.168.85.2:8443: connect: connection refused
	I1217 12:02:02.764189 3212985 node_ready.go:38] duration metric: took 6m0.005070756s for node "no-preload-118262" to be "Ready" ...
	I1217 12:02:02.767452 3212985 out.go:203] 
	W1217 12:02:02.770608 3212985 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 12:02:02.770638 3212985 out.go:285] * 
	W1217 12:02:02.772986 3212985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 12:02:02.776078 3212985 out.go:203] 
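At this point the no-preload-118262 start gives up: the node-readiness poll has retried for the full 6m0s wait window without the apiserver ever accepting a connection on 192.168.85.2:8443, so minikube exits with GUEST_START (WaitNodeCondition: context deadline exceeded). As the advice box above suggests, the first triage step is to capture the full log bundle for the profile; a sketch, with the profile name taken from the log:

	# Collect everything minikube knows about the failed profile for a bug report.
	minikube -p no-preload-118262 logs --file=logs.txt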
	I1217 12:02:00.244802 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:00.315692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:00.315780 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:00.376798 3219848 cri.go:89] found id: ""
	I1217 12:02:00.376842 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.376852 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:00.376859 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:00.376949 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:00.414474 3219848 cri.go:89] found id: ""
	I1217 12:02:00.414502 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.414513 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:00.414520 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:00.414590 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:00.447266 3219848 cri.go:89] found id: ""
	I1217 12:02:00.447306 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.447316 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:00.447323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:00.447415 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:00.477352 3219848 cri.go:89] found id: ""
	I1217 12:02:00.477378 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.477387 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:00.477394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:00.477457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:00.506577 3219848 cri.go:89] found id: ""
	I1217 12:02:00.506605 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.506614 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:00.506621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:00.506720 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:00.533943 3219848 cri.go:89] found id: ""
	I1217 12:02:00.533966 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.533975 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:00.533982 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:00.534051 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:00.560396 3219848 cri.go:89] found id: ""
	I1217 12:02:00.560462 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.560472 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:00.560479 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:00.560573 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:00.587859 3219848 cri.go:89] found id: ""
	I1217 12:02:00.587931 3219848 logs.go:282] 0 containers: []
	W1217 12:02:00.587955 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:00.587983 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:00.588035 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:00.620134 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:00.620217 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:00.677187 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:00.677223 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:00.694138 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:00.694242 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:00.762938 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:00.753622    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.754338    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.755466    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757073    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757631    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:00.753622    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.754338    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.755466    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757073    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:00.757631    2191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:00.763025 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:00.763058 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:01.167394 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:02:01.232118 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:01.232151 3219848 retry.go:31] will retry after 39.797194994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
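The storageclass addon apply fails for the same underlying reason, not because the manifest is invalid: kubectl cannot download the OpenAPI schema from the dead apiserver, so client-side validation aborts. minikube handles this with a timed retry (retry.go, a 39.8s backoff here); note that the --validate=false hint in the stderr would only skip the schema download, and the apply would still hit the same connection refused at the server. A minimal sketch of gating the apply on apiserver readiness instead, assuming the on-node paths shown in the log:

	# Block until the apiserver answers /readyz, then apply the addon manifest.
	until sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz >/dev/null 2>&1; do
	  sleep 5
	done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	    apply --force -f /etc/kubernetes/addons/storageclass.yaml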
	I1217 12:02:03.292559 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:03.304708 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:03.304784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:03.332491 3219848 cri.go:89] found id: ""
	I1217 12:02:03.332511 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.332519 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:03.332526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:03.332630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:03.361080 3219848 cri.go:89] found id: ""
	I1217 12:02:03.361107 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.361115 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:03.361121 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:03.361179 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:03.397354 3219848 cri.go:89] found id: ""
	I1217 12:02:03.397382 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.397391 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:03.397397 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:03.397473 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:03.431465 3219848 cri.go:89] found id: ""
	I1217 12:02:03.431493 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.431502 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:03.431509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:03.431569 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:03.464102 3219848 cri.go:89] found id: ""
	I1217 12:02:03.464125 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.464133 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:03.464139 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:03.464197 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:03.497848 3219848 cri.go:89] found id: ""
	I1217 12:02:03.497879 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.497888 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:03.497895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:03.497952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:03.568108 3219848 cri.go:89] found id: ""
	I1217 12:02:03.568130 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.568139 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:03.568144 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:03.568202 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:03.632108 3219848 cri.go:89] found id: ""
	I1217 12:02:03.632136 3219848 logs.go:282] 0 containers: []
	W1217 12:02:03.632151 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:03.632161 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:03.632173 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:03.724972 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:03.708641    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.709073    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.716627    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.717278    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.719035    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:03.708641    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.709073    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.716627    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.717278    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:03.719035    2293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:03.725000 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:03.725012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:03.753083 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:03.753174 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:03.790574 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:03.790596 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:03.863404 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:03.863488 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:04.837606 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:02:04.901525 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:04.901562 3219848 retry.go:31] will retry after 21.256241349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 12:02:06.385200 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:06.395642 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:06.395734 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:06.422500 3219848 cri.go:89] found id: ""
	I1217 12:02:06.422526 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.422535 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:06.422542 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:06.422603 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:06.449741 3219848 cri.go:89] found id: ""
	I1217 12:02:06.449763 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.449773 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:06.449779 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:06.449836 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:06.478823 3219848 cri.go:89] found id: ""
	I1217 12:02:06.478844 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.478852 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:06.478858 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:06.478924 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:06.507270 3219848 cri.go:89] found id: ""
	I1217 12:02:06.507298 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.507307 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:06.507313 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:06.507390 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:06.536741 3219848 cri.go:89] found id: ""
	I1217 12:02:06.536774 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.536783 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:06.536790 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:06.536859 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:06.569124 3219848 cri.go:89] found id: ""
	I1217 12:02:06.569152 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.569161 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:06.569168 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:06.569223 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:06.597119 3219848 cri.go:89] found id: ""
	I1217 12:02:06.597140 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.597148 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:06.597155 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:06.597213 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:06.623129 3219848 cri.go:89] found id: ""
	I1217 12:02:06.623152 3219848 logs.go:282] 0 containers: []
	W1217 12:02:06.623161 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:06.623171 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:06.623181 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:06.679634 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:06.679669 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:06.696235 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:06.696273 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:06.764004 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:06.755277    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.755704    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.757595    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.758654    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.760132    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:06.755277    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.755704    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.757595    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.758654    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:06.760132    2422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:06.764031 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:06.764044 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:06.789440 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:06.789478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:09.319544 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:09.335051 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:09.335144 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:09.363250 3219848 cri.go:89] found id: ""
	I1217 12:02:09.363278 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.363288 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:09.363296 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:09.363357 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:09.387533 3219848 cri.go:89] found id: ""
	I1217 12:02:09.387598 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.387624 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:09.387646 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:09.387735 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:09.411943 3219848 cri.go:89] found id: ""
	I1217 12:02:09.411970 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.411978 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:09.411985 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:09.412042 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:09.438061 3219848 cri.go:89] found id: ""
	I1217 12:02:09.438127 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.438151 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:09.438167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:09.438250 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:09.463378 3219848 cri.go:89] found id: ""
	I1217 12:02:09.463407 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.463415 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:09.463422 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:09.463481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:09.494069 3219848 cri.go:89] found id: ""
	I1217 12:02:09.494098 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.494107 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:09.494114 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:09.494178 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:09.526694 3219848 cri.go:89] found id: ""
	I1217 12:02:09.526771 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.526795 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:09.526815 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:09.526923 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:09.553523 3219848 cri.go:89] found id: ""
	I1217 12:02:09.553585 3219848 logs.go:282] 0 containers: []
	W1217 12:02:09.553616 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:09.553641 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:09.553678 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:09.618427 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:09.618463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:09.634212 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:09.634244 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:09.696895 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:09.688293    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.688801    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690481    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690806    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.692990    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:09.688293    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.688801    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690481    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.690806    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:09.692990    2534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:09.696914 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:09.696926 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:09.722288 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:09.722324 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:12.249861 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:12.261558 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:12.261626 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:12.293092 3219848 cri.go:89] found id: ""
	I1217 12:02:12.293113 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.293121 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:12.293128 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:12.293188 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:12.319347 3219848 cri.go:89] found id: ""
	I1217 12:02:12.319374 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.319384 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:12.319390 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:12.319448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:12.343912 3219848 cri.go:89] found id: ""
	I1217 12:02:12.343939 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.343948 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:12.343955 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:12.344013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:12.370544 3219848 cri.go:89] found id: ""
	I1217 12:02:12.370571 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.370581 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:12.370587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:12.370645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:12.397552 3219848 cri.go:89] found id: ""
	I1217 12:02:12.397578 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.397587 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:12.397593 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:12.397652 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:12.421606 3219848 cri.go:89] found id: ""
	I1217 12:02:12.421673 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.421699 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:12.421715 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:12.421791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:12.447065 3219848 cri.go:89] found id: ""
	I1217 12:02:12.447088 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.447097 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:12.447103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:12.447169 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:12.473547 3219848 cri.go:89] found id: ""
	I1217 12:02:12.473575 3219848 logs.go:282] 0 containers: []
	W1217 12:02:12.473583 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:12.473645 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:12.473670 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:12.489529 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:12.489559 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:12.574945 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:12.562789    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567073    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567687    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569241    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569901    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:12.562789    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567073    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.567687    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569241    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:12.569901    2637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:12.574970 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:12.574986 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:12.601521 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:12.601562 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:12.633893 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:12.633920 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
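The cycle above repeats one fixed probe: pgrep for a running kube-apiserver process, then a single crictl query per expected control-plane container, each returning an empty ID list. A minimal bash sketch of the same probe (the pgrep pattern, crictl flags, and container names are copied from the log; the loop itself is illustrative, not minikube's code):

	# probe for the control-plane containers the harness expects
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done

An empty `crictl ps -a --quiet` result is exactly the `found id: ""` / `0 containers` pair the log keeps printing.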
	I1217 12:02:15.190960 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:15.202334 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:15.202461 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:15.231453 3219848 cri.go:89] found id: ""
	I1217 12:02:15.231486 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.231495 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:15.231507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:15.231609 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:15.264097 3219848 cri.go:89] found id: ""
	I1217 12:02:15.264120 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.264129 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:15.264135 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:15.264196 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:15.293547 3219848 cri.go:89] found id: ""
	I1217 12:02:15.293574 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.293583 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:15.293589 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:15.293650 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:15.321905 3219848 cri.go:89] found id: ""
	I1217 12:02:15.321968 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.321991 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:15.322013 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:15.322084 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:15.349052 3219848 cri.go:89] found id: ""
	I1217 12:02:15.349085 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.349095 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:15.349102 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:15.349175 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:15.374350 3219848 cri.go:89] found id: ""
	I1217 12:02:15.374377 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.374387 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:15.374394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:15.374457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:15.412039 3219848 cri.go:89] found id: ""
	I1217 12:02:15.412066 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.412075 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:15.412082 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:15.412153 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:15.441228 3219848 cri.go:89] found id: ""
	I1217 12:02:15.441255 3219848 logs.go:282] 0 containers: []
	W1217 12:02:15.441265 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:15.441274 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:15.441309 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:15.467564 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:15.467601 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:15.501031 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:15.501100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:15.564025 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:15.564059 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:15.581879 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:15.581906 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:15.647244 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:15.638661    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.639327    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641006    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641615    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.643194    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:15.638661    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.639327    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641006    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.641615    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:15.643194    2770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
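Every describe-nodes attempt fails the same way: nothing answers on localhost:8443 inside the node, so kubectl cannot even fetch the API group list. Assuming shell access to the node (for example via minikube ssh), two standard checks would confirm that directly; this is an illustrative sketch, not something the harness runs:

	# is anything bound to the apiserver port?
	sudo ss -ltnp | grep ':8443' || echo "port 8443 not listening"
	# does the endpoint answer at all? (-k because the apiserver cert is self-signed)
	curl -ks --max-time 5 https://localhost:8443/healthz || echo "healthz unreachable"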
	I1217 12:02:18.147543 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:18.158738 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:18.158817 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:18.184828 3219848 cri.go:89] found id: ""
	I1217 12:02:18.184853 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.184862 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:18.184869 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:18.184931 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:18.211904 3219848 cri.go:89] found id: ""
	I1217 12:02:18.211935 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.211944 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:18.211950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:18.212010 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:18.237088 3219848 cri.go:89] found id: ""
	I1217 12:02:18.237154 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.237170 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:18.237177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:18.237239 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:18.278916 3219848 cri.go:89] found id: ""
	I1217 12:02:18.278943 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.278953 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:18.278960 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:18.279018 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:18.307105 3219848 cri.go:89] found id: ""
	I1217 12:02:18.307133 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.307143 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:18.307150 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:18.307210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:18.336099 3219848 cri.go:89] found id: ""
	I1217 12:02:18.336132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.336141 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:18.336148 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:18.336217 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:18.362366 3219848 cri.go:89] found id: ""
	I1217 12:02:18.362432 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.362456 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:18.362472 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:18.362547 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:18.388125 3219848 cri.go:89] found id: ""
	I1217 12:02:18.388151 3219848 logs.go:282] 0 containers: []
	W1217 12:02:18.388160 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:18.388169 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:18.388180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:18.456052 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:18.446941    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.447634    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449296    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449839    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.451474    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:18.446941    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.447634    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449296    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.449839    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:18.451474    2856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:18.456114 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:18.456134 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:18.481868 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:18.481899 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:18.525523 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:18.525600 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:18.594163 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:18.594200 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:21.113595 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:21.124720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:21.124792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:21.150373 3219848 cri.go:89] found id: ""
	I1217 12:02:21.150397 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.150406 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:21.150412 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:21.150471 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:21.179044 3219848 cri.go:89] found id: ""
	I1217 12:02:21.179069 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.179078 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:21.179085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:21.179156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:21.205105 3219848 cri.go:89] found id: ""
	I1217 12:02:21.205132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.205141 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:21.205147 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:21.205207 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:21.230210 3219848 cri.go:89] found id: ""
	I1217 12:02:21.230235 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.230243 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:21.230251 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:21.230328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:21.265026 3219848 cri.go:89] found id: ""
	I1217 12:02:21.265052 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.265061 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:21.265068 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:21.265128 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:21.302976 3219848 cri.go:89] found id: ""
	I1217 12:02:21.303002 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.303017 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:21.303025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:21.303097 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:21.333258 3219848 cri.go:89] found id: ""
	I1217 12:02:21.333282 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.333292 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:21.333299 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:21.333361 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:21.359283 3219848 cri.go:89] found id: ""
	I1217 12:02:21.359308 3219848 logs.go:282] 0 containers: []
	W1217 12:02:21.359317 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:21.359327 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:21.359338 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:21.416901 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:21.416944 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:21.433045 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:21.433074 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:21.505849 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:21.494474    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.495106    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.496640    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.497253    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.500699    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:21.494474    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.495106    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.496640    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.497253    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:21.500699    2974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:21.505920 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:21.505948 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:21.534970 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:21.535156 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:21.925292 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 12:02:21.990437 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:21.990546 3219848 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
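The storage-provisioner apply fails in client-side validation: kubectl tries to download the OpenAPI schema from the dead apiserver before validating the manifest. The error's own suggestion, --validate=false, only sidesteps that download; the apply itself would still need a live server. A hedged sketch of the suggested retry (binary and manifest paths taken from the log):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml
	# --validate=false skips the openapi fetch, but this still fails here:
	# the apply needs an apiserver listening on localhost:8443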
	I1217 12:02:24.077604 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:24.089001 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:24.089072 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:24.120652 3219848 cri.go:89] found id: ""
	I1217 12:02:24.120677 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.120688 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:24.120695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:24.120755 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:24.147236 3219848 cri.go:89] found id: ""
	I1217 12:02:24.147263 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.147273 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:24.147280 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:24.147339 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:24.173122 3219848 cri.go:89] found id: ""
	I1217 12:02:24.173147 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.173157 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:24.173163 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:24.173223 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:24.207220 3219848 cri.go:89] found id: ""
	I1217 12:02:24.207243 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.207253 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:24.207259 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:24.207324 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:24.232981 3219848 cri.go:89] found id: ""
	I1217 12:02:24.233004 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.233013 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:24.233020 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:24.233087 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:24.266790 3219848 cri.go:89] found id: ""
	I1217 12:02:24.266815 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.266825 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:24.266832 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:24.266896 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:24.299029 3219848 cri.go:89] found id: ""
	I1217 12:02:24.299056 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.299065 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:24.299072 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:24.299150 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:24.332940 3219848 cri.go:89] found id: ""
	I1217 12:02:24.332966 3219848 logs.go:282] 0 containers: []
	W1217 12:02:24.332975 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:24.332984 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:24.332994 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:24.358486 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:24.358520 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:24.395087 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:24.395119 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:24.453543 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:24.453581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:24.469070 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:24.469100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:24.547838 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:24.537508    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.538320    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.540311    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542086    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542713    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:24.537508    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.538320    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.540311    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542086    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:24.542713    3105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:26.158720 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 12:02:26.235734 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:26.235852 3219848 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
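minikube retries these addon applies on its own (addons.go: "apply failed, will retry"), so the only thing that can make them succeed is the apiserver coming up first. A small illustrative wait loop using kubectl's raw readiness endpoint, purely as a sketch (no timeout handling, paths from the log):

	# block until the apiserver answers /readyz, then let the addon retry proceed
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw=/readyz >/dev/null 2>&1; do
	  sleep 2
	done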
	I1217 12:02:27.048020 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:27.058730 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:27.058803 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:27.083792 3219848 cri.go:89] found id: ""
	I1217 12:02:27.083815 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.083824 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:27.083831 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:27.083893 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:27.110794 3219848 cri.go:89] found id: ""
	I1217 12:02:27.110820 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.110841 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:27.110865 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:27.110940 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:27.136730 3219848 cri.go:89] found id: ""
	I1217 12:02:27.136760 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.136768 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:27.136775 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:27.136833 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:27.161755 3219848 cri.go:89] found id: ""
	I1217 12:02:27.161780 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.161813 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:27.161819 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:27.161886 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:27.187885 3219848 cri.go:89] found id: ""
	I1217 12:02:27.187912 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.187921 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:27.187928 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:27.187987 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:27.214398 3219848 cri.go:89] found id: ""
	I1217 12:02:27.214424 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.214432 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:27.214440 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:27.214528 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:27.240617 3219848 cri.go:89] found id: ""
	I1217 12:02:27.240642 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.240652 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:27.240658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:27.240740 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:27.272907 3219848 cri.go:89] found id: ""
	I1217 12:02:27.272985 3219848 logs.go:282] 0 containers: []
	W1217 12:02:27.273008 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:27.273034 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:27.273061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:27.338834 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:27.338872 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:27.355488 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:27.355518 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:27.425201 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:27.415325    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.415952    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.418308    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.419305    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.420311    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:27.415325    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.415952    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.418308    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.419305    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:27.420311    3210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:27.425231 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:27.425245 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:27.451232 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:27.451264 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
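Each polling round ends with the same evidence sweep: kubelet and containerd journals, recent kernel warnings, and a container listing. Collected by hand, the equivalent bundle (commands copied verbatim from the log) would be:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a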
	I1217 12:02:29.988282 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:29.998906 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:29.998982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:30.032593 3219848 cri.go:89] found id: ""
	I1217 12:02:30.032619 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.032628 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:30.032635 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:30.032703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:30.065200 3219848 cri.go:89] found id: ""
	I1217 12:02:30.065230 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.065239 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:30.065247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:30.065319 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:30.100730 3219848 cri.go:89] found id: ""
	I1217 12:02:30.100758 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.100767 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:30.100773 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:30.100837 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:30.127247 3219848 cri.go:89] found id: ""
	I1217 12:02:30.127273 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.127293 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:30.127299 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:30.127380 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:30.156586 3219848 cri.go:89] found id: ""
	I1217 12:02:30.156611 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.156619 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:30.156627 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:30.156692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:30.182150 3219848 cri.go:89] found id: ""
	I1217 12:02:30.182174 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.182215 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:30.182222 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:30.182285 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:30.209339 3219848 cri.go:89] found id: ""
	I1217 12:02:30.209366 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.209376 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:30.209383 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:30.209443 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:30.235224 3219848 cri.go:89] found id: ""
	I1217 12:02:30.235250 3219848 logs.go:282] 0 containers: []
	W1217 12:02:30.235259 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:30.235268 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:30.235279 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:30.305932 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:30.297455    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.298291    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300025    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300319    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.301797    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:30.297455    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.298291    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300025    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.300319    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:30.301797    3315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:30.305955 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:30.305968 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:30.335249 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:30.335282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:30.366831 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:30.366859 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:30.423045 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:30.423081 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:32.941855 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:32.953974 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:32.954052 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:32.986211 3219848 cri.go:89] found id: ""
	I1217 12:02:32.986233 3219848 logs.go:282] 0 containers: []
	W1217 12:02:32.986242 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:32.986249 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:32.986333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:33.015180 3219848 cri.go:89] found id: ""
	I1217 12:02:33.015209 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.015218 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:33.015227 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:33.015292 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:33.043066 3219848 cri.go:89] found id: ""
	I1217 12:02:33.043132 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.043182 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:33.043216 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:33.043303 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:33.070150 3219848 cri.go:89] found id: ""
	I1217 12:02:33.070178 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.070187 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:33.070194 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:33.070254 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:33.099464 3219848 cri.go:89] found id: ""
	I1217 12:02:33.099502 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.099511 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:33.099519 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:33.099592 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:33.125134 3219848 cri.go:89] found id: ""
	I1217 12:02:33.125161 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.125170 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:33.125177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:33.125238 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:33.152585 3219848 cri.go:89] found id: ""
	I1217 12:02:33.152608 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.152617 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:33.152638 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:33.152703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:33.177715 3219848 cri.go:89] found id: ""
	I1217 12:02:33.177740 3219848 logs.go:282] 0 containers: []
	W1217 12:02:33.177749 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:33.177759 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:33.177770 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:33.234986 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:33.235024 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:33.255146 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:33.255186 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:33.339613 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:33.330741    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.331497    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333272    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333726    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.334950    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:33.330741    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.331497    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333272    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.333726    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:33.334950    3433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:33.339647 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:33.339660 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:33.366064 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:33.366101 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:35.894549 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:35.904950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:35.905022 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:35.933462 3219848 cri.go:89] found id: ""
	I1217 12:02:35.933485 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.933493 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:35.933499 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:35.933558 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:35.958161 3219848 cri.go:89] found id: ""
	I1217 12:02:35.958228 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.958254 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:35.958275 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:35.958364 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:35.983016 3219848 cri.go:89] found id: ""
	I1217 12:02:35.983041 3219848 logs.go:282] 0 containers: []
	W1217 12:02:35.983051 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:35.983057 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:35.983126 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:36.015482 3219848 cri.go:89] found id: ""
	I1217 12:02:36.015527 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.015536 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:36.015543 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:36.015620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:36.046357 3219848 cri.go:89] found id: ""
	I1217 12:02:36.046393 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.046406 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:36.046416 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:36.046577 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:36.072553 3219848 cri.go:89] found id: ""
	I1217 12:02:36.072587 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.072596 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:36.072602 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:36.072662 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:36.099878 3219848 cri.go:89] found id: ""
	I1217 12:02:36.099911 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.099927 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:36.099934 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:36.100024 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:36.129180 3219848 cri.go:89] found id: ""
	I1217 12:02:36.129203 3219848 logs.go:282] 0 containers: []
	W1217 12:02:36.129212 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:36.129221 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:36.129234 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:36.186216 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:36.186254 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:36.203136 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:36.203166 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:36.273412 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:36.264653    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.265536    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267226    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267782    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.269421    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:36.264653    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.265536    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267226    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.267782    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:36.269421    3546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:36.273433 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:36.273446 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:36.300346 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:36.300378 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:38.840293 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:38.851323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:38.851395 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:38.878324 3219848 cri.go:89] found id: ""
	I1217 12:02:38.878347 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.878356 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:38.878362 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:38.878418 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:38.904803 3219848 cri.go:89] found id: ""
	I1217 12:02:38.904824 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.904833 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:38.904839 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:38.904897 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:38.929044 3219848 cri.go:89] found id: ""
	I1217 12:02:38.929067 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.929075 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:38.929081 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:38.929148 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:38.953075 3219848 cri.go:89] found id: ""
	I1217 12:02:38.953101 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.953109 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:38.953119 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:38.953179 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:38.982538 3219848 cri.go:89] found id: ""
	I1217 12:02:38.982560 3219848 logs.go:282] 0 containers: []
	W1217 12:02:38.982569 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:38.982575 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:38.982634 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:39.009774 3219848 cri.go:89] found id: ""
	I1217 12:02:39.009797 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.009806 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:39.009813 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:39.009877 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:39.035772 3219848 cri.go:89] found id: ""
	I1217 12:02:39.035848 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.035872 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:39.035894 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:39.035966 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:39.070261 3219848 cri.go:89] found id: ""
	I1217 12:02:39.070282 3219848 logs.go:282] 0 containers: []
	W1217 12:02:39.070291 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:39.070299 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:39.070311 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:39.086150 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:39.086228 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:39.158855 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:39.150093    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.151044    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.152764    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.153406    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.155059    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:39.150093    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.151044    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.152764    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.153406    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:39.155059    3656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:39.158917 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:39.158948 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:39.184120 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:39.184154 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:39.228401 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:39.228446 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:41.030449 3219848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 12:02:41.099078 3219848 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 12:02:41.099186 3219848 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 12:02:41.102220 3219848 out.go:179] * Enabled addons: 
	I1217 12:02:41.105179 3219848 addons.go:530] duration metric: took 1m49.649331261s for enable addons: enabled=[]
	I1217 12:02:41.789011 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:41.800666 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:41.800741 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:41.831181 3219848 cri.go:89] found id: ""
	I1217 12:02:41.831214 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.831222 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:41.831229 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:41.831292 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:41.855868 3219848 cri.go:89] found id: ""
	I1217 12:02:41.855893 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.855901 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:41.855909 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:41.855970 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:41.880077 3219848 cri.go:89] found id: ""
	I1217 12:02:41.880102 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.880110 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:41.880117 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:41.880174 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:41.904526 3219848 cri.go:89] found id: ""
	I1217 12:02:41.904553 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.904562 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:41.904568 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:41.904630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:41.930234 3219848 cri.go:89] found id: ""
	I1217 12:02:41.930257 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.930266 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:41.930272 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:41.930329 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:41.958809 3219848 cri.go:89] found id: ""
	I1217 12:02:41.958835 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.958844 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:41.958851 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:41.958909 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:41.983616 3219848 cri.go:89] found id: ""
	I1217 12:02:41.983642 3219848 logs.go:282] 0 containers: []
	W1217 12:02:41.983652 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:41.983658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:41.983723 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:42.011680 3219848 cri.go:89] found id: ""
	I1217 12:02:42.011705 3219848 logs.go:282] 0 containers: []
	W1217 12:02:42.011714 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:42.011725 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:42.011736 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:42.073172 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:42.073215 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:42.092098 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:42.092139 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:42.170615 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:42.158978    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.160329    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.161071    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.163397    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.164052    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:42.158978    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.160329    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.161071    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.163397    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:42.164052    3779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:42.170644 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:42.170669 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:42.200096 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:42.200137 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:44.738108 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:44.751949 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:44.752049 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:44.785830 3219848 cri.go:89] found id: ""
	I1217 12:02:44.785869 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.785902 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:44.785911 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:44.785988 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:44.815102 3219848 cri.go:89] found id: ""
	I1217 12:02:44.815138 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.815148 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:44.815154 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:44.815256 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:44.843623 3219848 cri.go:89] found id: ""
	I1217 12:02:44.843658 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.843667 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:44.843674 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:44.843768 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:44.868589 3219848 cri.go:89] found id: ""
	I1217 12:02:44.868612 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.868620 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:44.868626 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:44.868710 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:44.893731 3219848 cri.go:89] found id: ""
	I1217 12:02:44.893757 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.893767 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:44.893774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:44.893877 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:44.920703 3219848 cri.go:89] found id: ""
	I1217 12:02:44.920732 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.920741 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:44.920748 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:44.920807 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:44.945270 3219848 cri.go:89] found id: ""
	I1217 12:02:44.945307 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.945317 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:44.945323 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:44.945390 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:44.974571 3219848 cri.go:89] found id: ""
	I1217 12:02:44.974669 3219848 logs.go:282] 0 containers: []
	W1217 12:02:44.974693 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:44.974723 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:44.974767 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:45.011160 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:45.011262 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:45.135210 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:45.135297 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:45.172030 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:45.172125 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:45.299181 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:45.286225    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.288700    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.289610    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.291554    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.292270    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:45.286225    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.288700    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.289610    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.291554    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:45.292270    3904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:45.299256 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:45.299270 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:47.834408 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:47.845640 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:47.845713 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:47.875767 3219848 cri.go:89] found id: ""
	I1217 12:02:47.875793 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.875803 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:47.875809 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:47.875894 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:47.900760 3219848 cri.go:89] found id: ""
	I1217 12:02:47.900798 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.900808 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:47.900815 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:47.900916 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:47.925606 3219848 cri.go:89] found id: ""
	I1217 12:02:47.925640 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.925650 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:47.925656 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:47.925730 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:47.953896 3219848 cri.go:89] found id: ""
	I1217 12:02:47.953919 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.953928 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:47.953935 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:47.954003 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:47.979667 3219848 cri.go:89] found id: ""
	I1217 12:02:47.979736 3219848 logs.go:282] 0 containers: []
	W1217 12:02:47.979759 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:47.979780 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:47.979871 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:48.009398 3219848 cri.go:89] found id: ""
	I1217 12:02:48.009477 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.009502 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:48.009528 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:48.009630 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:48.039277 3219848 cri.go:89] found id: ""
	I1217 12:02:48.039349 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.039373 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:48.039400 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:48.039498 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:48.065115 3219848 cri.go:89] found id: ""
	I1217 12:02:48.065140 3219848 logs.go:282] 0 containers: []
	W1217 12:02:48.065151 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:48.065162 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:48.065175 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:48.081650 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:48.081680 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:48.149022 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:48.140864    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.141345    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.142918    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.143402    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.144920    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:48.140864    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.141345    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.142918    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.143402    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:48.144920    4004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:48.149046 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:48.149060 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:48.174962 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:48.174999 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:48.204617 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:48.204645 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:50.772582 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:50.784158 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:50.784228 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:50.814532 3219848 cri.go:89] found id: ""
	I1217 12:02:50.814555 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.814563 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:50.814569 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:50.814628 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:50.848966 3219848 cri.go:89] found id: ""
	I1217 12:02:50.848989 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.848997 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:50.849004 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:50.849066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:50.873257 3219848 cri.go:89] found id: ""
	I1217 12:02:50.873284 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.873293 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:50.873300 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:50.873364 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:50.897538 3219848 cri.go:89] found id: ""
	I1217 12:02:50.897564 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.897573 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:50.897579 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:50.897638 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:50.922912 3219848 cri.go:89] found id: ""
	I1217 12:02:50.922937 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.922946 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:50.922953 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:50.923013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:50.948094 3219848 cri.go:89] found id: ""
	I1217 12:02:50.948120 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.948129 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:50.948136 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:50.948196 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:50.974087 3219848 cri.go:89] found id: ""
	I1217 12:02:50.974114 3219848 logs.go:282] 0 containers: []
	W1217 12:02:50.974124 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:50.974131 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:50.974190 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:51.006127 3219848 cri.go:89] found id: ""
	I1217 12:02:51.006159 3219848 logs.go:282] 0 containers: []
	W1217 12:02:51.006169 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:51.006256 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:51.006275 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:51.032290 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:51.032323 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:51.063443 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:51.063469 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:51.119487 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:51.119523 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:51.138001 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:51.138031 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:51.208764 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:51.200371    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.201009    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202548    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202968    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.204568    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:02:51.200371    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.201009    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202548    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.202968    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:51.204568    4135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:02:53.709691 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:53.720597 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:53.720678 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:53.745776 3219848 cri.go:89] found id: ""
	I1217 12:02:53.745802 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.745811 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:53.745819 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:53.745878 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:53.775989 3219848 cri.go:89] found id: ""
	I1217 12:02:53.776013 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.776021 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:53.776027 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:53.776098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:53.810226 3219848 cri.go:89] found id: ""
	I1217 12:02:53.810253 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.810262 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:53.810269 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:53.810333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:53.839758 3219848 cri.go:89] found id: ""
	I1217 12:02:53.839778 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.839787 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:53.839793 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:53.839857 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:53.864680 3219848 cri.go:89] found id: ""
	I1217 12:02:53.864745 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.864768 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:53.864788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:53.864872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:53.888540 3219848 cri.go:89] found id: ""
	I1217 12:02:53.888561 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.888569 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:53.888576 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:53.888640 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:53.912908 3219848 cri.go:89] found id: ""
	I1217 12:02:53.912973 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.912998 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:53.913015 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:53.913087 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:53.942233 3219848 cri.go:89] found id: ""
	I1217 12:02:53.942254 3219848 logs.go:282] 0 containers: []
	W1217 12:02:53.942263 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:53.942285 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:53.942300 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:53.998450 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:53.998485 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:54.017836 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:54.017867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:54.086072 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:54.077439    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.078327    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.079921    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.080399    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:54.082101    4234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:54.086097 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:54.086110 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:54.112391 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:54.112586 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
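	Each retry cycle above enumerates the expected control-plane containers one name at a time before falling back to journald and dmesg. The same enumeration condensed into a loop (a sketch, run inside the node; the crictl flags are exactly those the log itself uses):
	
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container found matching $name"
	done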
	I1217 12:02:56.648110 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:56.658791 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:56.658863 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:56.685484 3219848 cri.go:89] found id: ""
	I1217 12:02:56.685508 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.685516 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:56.685526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:56.685587 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:56.710064 3219848 cri.go:89] found id: ""
	I1217 12:02:56.710126 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.710141 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:56.710148 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:56.710219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:56.735357 3219848 cri.go:89] found id: ""
	I1217 12:02:56.735383 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.735393 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:56.735404 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:56.735465 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:56.767684 3219848 cri.go:89] found id: ""
	I1217 12:02:56.767710 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.767724 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:56.767731 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:56.767792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:56.809924 3219848 cri.go:89] found id: ""
	I1217 12:02:56.809951 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.809960 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:56.809968 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:56.810026 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:56.839853 3219848 cri.go:89] found id: ""
	I1217 12:02:56.839879 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.839889 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:56.839895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:56.839956 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:56.866637 3219848 cri.go:89] found id: ""
	I1217 12:02:56.866663 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.866672 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:56.866679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:56.866746 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:56.891828 3219848 cri.go:89] found id: ""
	I1217 12:02:56.891853 3219848 logs.go:282] 0 containers: []
	W1217 12:02:56.891862 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:56.891872 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:56.891885 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:56.948612 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:56.948652 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:56.964832 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:56.964864 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:57.035706 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:57.026894    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.027527    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.029280    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.030006    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:57.031607    4348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:57.035725 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:57.035783 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:02:57.061297 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:02:57.061332 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:02:59.592887 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:02:59.603568 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:02:59.603647 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:02:59.628351 3219848 cri.go:89] found id: ""
	I1217 12:02:59.628378 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.628387 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:02:59.628395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:02:59.628503 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:02:59.654358 3219848 cri.go:89] found id: ""
	I1217 12:02:59.654380 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.654388 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:02:59.654394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:02:59.654456 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:02:59.679684 3219848 cri.go:89] found id: ""
	I1217 12:02:59.679703 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.679717 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:02:59.679723 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:02:59.679786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:02:59.706460 3219848 cri.go:89] found id: ""
	I1217 12:02:59.706491 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.706501 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:02:59.706507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:02:59.706570 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:02:59.736016 3219848 cri.go:89] found id: ""
	I1217 12:02:59.736041 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.736050 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:02:59.736057 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:02:59.736116 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:02:59.778297 3219848 cri.go:89] found id: ""
	I1217 12:02:59.778323 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.778332 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:02:59.778339 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:02:59.778404 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:02:59.809983 3219848 cri.go:89] found id: ""
	I1217 12:02:59.810009 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.810018 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:02:59.810025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:02:59.810082 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:02:59.843076 3219848 cri.go:89] found id: ""
	I1217 12:02:59.843102 3219848 logs.go:282] 0 containers: []
	W1217 12:02:59.843110 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:02:59.843119 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:02:59.843131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:02:59.902975 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:02:59.903012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:02:59.918923 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:02:59.918958 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:02:59.987681 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:02:59.979645    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.980298    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.981764    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.982249    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:02:59.983739    4455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:02:59.987704 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:02:59.987716 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:00.126179 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:00.128746 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:02.747342 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:02.759443 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:02.759536 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:02.813879 3219848 cri.go:89] found id: ""
	I1217 12:03:02.813907 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.813917 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:02.813924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:02.813996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:02.856869 3219848 cri.go:89] found id: ""
	I1217 12:03:02.856899 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.856908 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:02.856915 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:02.856973 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:02.883984 3219848 cri.go:89] found id: ""
	I1217 12:03:02.884015 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.884024 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:02.884031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:02.884094 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:02.911584 3219848 cri.go:89] found id: ""
	I1217 12:03:02.911605 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.911613 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:02.911619 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:02.911677 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:02.941815 3219848 cri.go:89] found id: ""
	I1217 12:03:02.941837 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.941847 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:02.941853 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:02.941920 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:02.971949 3219848 cri.go:89] found id: ""
	I1217 12:03:02.971972 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.971980 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:02.971986 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:02.972045 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:02.997848 3219848 cri.go:89] found id: ""
	I1217 12:03:02.997875 3219848 logs.go:282] 0 containers: []
	W1217 12:03:02.997884 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:02.997891 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:02.997952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:03.025293 3219848 cri.go:89] found id: ""
	I1217 12:03:03.025321 3219848 logs.go:282] 0 containers: []
	W1217 12:03:03.025330 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:03.025339 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:03.025353 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:03.095479 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:03.086357    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.087966    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.088719    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.089902    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:03.090320    4562 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:03.095503 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:03.095517 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:03.121627 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:03.121668 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:03.152132 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:03.152162 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:03.208671 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:03.208717 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:05.726193 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:05.737765 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:05.737842 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:05.803315 3219848 cri.go:89] found id: ""
	I1217 12:03:05.803338 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.803355 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:05.803364 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:05.803424 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:05.852889 3219848 cri.go:89] found id: ""
	I1217 12:03:05.852952 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.852967 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:05.852975 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:05.853035 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:05.885239 3219848 cri.go:89] found id: ""
	I1217 12:03:05.885263 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.885274 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:05.885281 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:05.885346 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:05.909571 3219848 cri.go:89] found id: ""
	I1217 12:03:05.909601 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.909610 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:05.909617 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:05.909683 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:05.944648 3219848 cri.go:89] found id: ""
	I1217 12:03:05.944714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.944729 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:05.944742 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:05.944801 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:05.969671 3219848 cri.go:89] found id: ""
	I1217 12:03:05.969707 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.969716 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:05.969738 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:05.969819 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:05.994549 3219848 cri.go:89] found id: ""
	I1217 12:03:05.994575 3219848 logs.go:282] 0 containers: []
	W1217 12:03:05.994584 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:05.994590 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:05.994648 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:06.025175 3219848 cri.go:89] found id: ""
	I1217 12:03:06.025201 3219848 logs.go:282] 0 containers: []
	W1217 12:03:06.025212 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:06.025223 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:06.025255 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:06.094463 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:06.085807    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.086594    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.088396    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.089018    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:06.090252    4677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:06.094488 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:06.094503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:06.120857 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:06.120892 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:06.148825 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:06.148854 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:06.207501 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:06.207537 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:08.724013 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:08.734763 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:08.734854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:08.797461 3219848 cri.go:89] found id: ""
	I1217 12:03:08.797536 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.797561 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:08.797583 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:08.797692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:08.849950 3219848 cri.go:89] found id: ""
	I1217 12:03:08.850015 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.850031 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:08.850039 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:08.850099 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:08.876353 3219848 cri.go:89] found id: ""
	I1217 12:03:08.876378 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.876387 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:08.876394 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:08.876474 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:08.902743 3219848 cri.go:89] found id: ""
	I1217 12:03:08.902767 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.902776 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:08.902783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:08.902847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:08.928380 3219848 cri.go:89] found id: ""
	I1217 12:03:08.928405 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.928439 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:08.928447 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:08.928508 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:08.953372 3219848 cri.go:89] found id: ""
	I1217 12:03:08.953397 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.953406 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:08.953413 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:08.953481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:08.977913 3219848 cri.go:89] found id: ""
	I1217 12:03:08.977935 3219848 logs.go:282] 0 containers: []
	W1217 12:03:08.977945 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:08.977951 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:08.978015 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:09.014088 3219848 cri.go:89] found id: ""
	I1217 12:03:09.014114 3219848 logs.go:282] 0 containers: []
	W1217 12:03:09.014123 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:09.014133 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:09.014144 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:09.069559 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:09.069599 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:09.085849 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:09.085877 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:09.153859 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:09.145727    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.146529    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148157    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.148779    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:09.150028    4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:09.153879 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:09.153892 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:09.179067 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:09.179099 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:11.708448 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:11.719221 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:11.719291 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:11.744009 3219848 cri.go:89] found id: ""
	I1217 12:03:11.744033 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.744042 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:11.744048 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:11.744104 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:11.795640 3219848 cri.go:89] found id: ""
	I1217 12:03:11.795663 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.795671 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:11.795678 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:11.795739 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:11.851553 3219848 cri.go:89] found id: ""
	I1217 12:03:11.851573 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.851581 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:11.851587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:11.851642 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:11.879197 3219848 cri.go:89] found id: ""
	I1217 12:03:11.879272 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.879294 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:11.879316 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:11.879432 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:11.904743 3219848 cri.go:89] found id: ""
	I1217 12:03:11.904816 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.904839 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:11.904864 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:11.904974 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:11.930378 3219848 cri.go:89] found id: ""
	I1217 12:03:11.930452 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.930482 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:11.930491 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:11.930562 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:11.955446 3219848 cri.go:89] found id: ""
	I1217 12:03:11.955475 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.955485 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:11.955492 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:11.955553 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:11.980056 3219848 cri.go:89] found id: ""
	I1217 12:03:11.980082 3219848 logs.go:282] 0 containers: []
	W1217 12:03:11.980092 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:11.980102 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:11.980113 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:12.039392 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:12.039430 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:12.055724 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:12.055752 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:12.120835 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:12.111964    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.112751    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114462    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.114770    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:12.116985    4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:12.120858 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:12.120871 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:12.145568 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:12.145601 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:14.685252 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:14.695909 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:14.695982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:14.722094 3219848 cri.go:89] found id: ""
	I1217 12:03:14.722116 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.722124 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:14.722131 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:14.722191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:14.747765 3219848 cri.go:89] found id: ""
	I1217 12:03:14.747790 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.747799 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:14.747805 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:14.747863 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:14.832061 3219848 cri.go:89] found id: ""
	I1217 12:03:14.832086 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.832096 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:14.832103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:14.832175 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:14.861589 3219848 cri.go:89] found id: ""
	I1217 12:03:14.861612 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.861621 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:14.861628 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:14.861687 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:14.887122 3219848 cri.go:89] found id: ""
	I1217 12:03:14.887144 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.887153 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:14.887160 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:14.887219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:14.913961 3219848 cri.go:89] found id: ""
	I1217 12:03:14.913988 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.913996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:14.914003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:14.914063 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:14.940509 3219848 cri.go:89] found id: ""
	I1217 12:03:14.940539 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.940584 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:14.940599 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:14.940684 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:14.968190 3219848 cri.go:89] found id: ""
	I1217 12:03:14.968260 3219848 logs.go:282] 0 containers: []
	W1217 12:03:14.968286 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:14.968314 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:14.968341 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:15.025687 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:15.025728 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:15.048063 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:15.048161 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:15.120549 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:15.111260    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.111932    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.113791    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.114487    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:15.116204    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:15.120575 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:15.120590 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:15.147374 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:15.147419 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
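	The block above is one pass of minikube's apiserver wait loop: pgrep for a kube-apiserver process, a crictl query per expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and, when every query comes back empty, a fallback sweep of kubelet, dmesg, describe-nodes, containerd, and container-status logs. The same probes can be run by hand against the node; a minimal sketch reusing the exact commands from this log, with <profile> as a placeholder for the profile under test:

	    # Probe for a live apiserver process, then ask the CRI directly.
	    minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	    # With no containers at all, the kubelet journal is the first place to look:
	    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400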
	I1217 12:03:17.678613 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:17.689902 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:17.689996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:17.715580 3219848 cri.go:89] found id: ""
	I1217 12:03:17.715617 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.715626 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:17.715634 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:17.715706 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:17.746656 3219848 cri.go:89] found id: ""
	I1217 12:03:17.746680 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.746689 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:17.746696 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:17.746757 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:17.777911 3219848 cri.go:89] found id: ""
	I1217 12:03:17.777981 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.778005 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:17.778031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:17.778142 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:17.841621 3219848 cri.go:89] found id: ""
	I1217 12:03:17.841682 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.841714 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:17.841734 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:17.841839 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:17.874462 3219848 cri.go:89] found id: ""
	I1217 12:03:17.874536 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.874559 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:17.874573 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:17.874655 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:17.899519 3219848 cri.go:89] found id: ""
	I1217 12:03:17.899563 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.899573 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:17.899580 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:17.899654 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:17.925535 3219848 cri.go:89] found id: ""
	I1217 12:03:17.925559 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.925568 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:17.925574 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:17.925642 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:17.950672 3219848 cri.go:89] found id: ""
	I1217 12:03:17.950737 3219848 logs.go:282] 0 containers: []
	W1217 12:03:17.950761 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:17.950787 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:17.950826 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:18.006915 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:18.006964 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:18.024598 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:18.024632 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:18.093800 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:18.085487    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.086439    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.087176    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.088142    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:18.089680    5143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:18.093830 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:18.093843 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:18.120115 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:18.120150 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
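	Every "describe nodes" attempt in these cycles dies the same way, with `dial tcp [::1]:8443: connect: connection refused`: nothing is listening on the apiserver port at all, which matches the empty crictl listings rather than a present-but-crashing apiserver. A quick confirmation from inside the node (a sketch; /healthz is the standard apiserver health endpoint, and -k skips verification of the self-signed cert):

	    # Expect "connection refused" while the control plane is down,
	    # and "ok" once kube-apiserver is actually serving.
	    minikube -p <profile> ssh -- curl -k https://localhost:8443/healthz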
	I1217 12:03:20.651699 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:20.662809 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:20.662885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:20.692750 3219848 cri.go:89] found id: ""
	I1217 12:03:20.692772 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.692781 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:20.692787 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:20.692854 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:20.723234 3219848 cri.go:89] found id: ""
	I1217 12:03:20.723259 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.723267 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:20.723273 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:20.723334 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:20.749812 3219848 cri.go:89] found id: ""
	I1217 12:03:20.749833 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.749841 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:20.749847 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:20.749903 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:20.799186 3219848 cri.go:89] found id: ""
	I1217 12:03:20.799208 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.799216 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:20.799222 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:20.799280 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:20.850498 3219848 cri.go:89] found id: ""
	I1217 12:03:20.850573 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.850596 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:20.850617 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:20.850735 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:20.881588 3219848 cri.go:89] found id: ""
	I1217 12:03:20.881660 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.881682 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:20.881702 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:20.881790 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:20.911209 3219848 cri.go:89] found id: ""
	I1217 12:03:20.911275 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.911301 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:20.911316 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:20.911391 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:20.938447 3219848 cri.go:89] found id: ""
	I1217 12:03:20.938473 3219848 logs.go:282] 0 containers: []
	W1217 12:03:20.938483 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:20.938492 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:20.938503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:20.995421 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:20.995463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:21.013450 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:21.013483 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:21.084404 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:21.075746    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.076533    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078205    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.078900    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:21.080479    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:21.084449 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:21.084463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:21.111296 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:21.111335 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:23.647949 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:23.658668 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:23.658737 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:23.685275 3219848 cri.go:89] found id: ""
	I1217 12:03:23.685298 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.685307 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:23.685314 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:23.685375 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:23.711416 3219848 cri.go:89] found id: ""
	I1217 12:03:23.711466 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.711478 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:23.711485 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:23.711549 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:23.738391 3219848 cri.go:89] found id: ""
	I1217 12:03:23.738418 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.738427 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:23.738433 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:23.738492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:23.801227 3219848 cri.go:89] found id: ""
	I1217 12:03:23.801253 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.801262 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:23.801268 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:23.801327 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:23.837564 3219848 cri.go:89] found id: ""
	I1217 12:03:23.837585 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.837593 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:23.837600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:23.837660 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:23.864057 3219848 cri.go:89] found id: ""
	I1217 12:03:23.864078 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.864086 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:23.864093 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:23.864159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:23.888263 3219848 cri.go:89] found id: ""
	I1217 12:03:23.888289 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.888298 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:23.888305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:23.888363 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:23.917533 3219848 cri.go:89] found id: ""
	I1217 12:03:23.917555 3219848 logs.go:282] 0 containers: []
	W1217 12:03:23.917564 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:23.917573 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:23.917584 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:23.946496 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:23.946525 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:24.003650 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:24.003697 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:24.022449 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:24.022482 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:24.093823 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:24.084998    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.085736    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.087440    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.088190    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:24.089867    5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:24.093845 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:24.093858 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:26.622844 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:26.634100 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:26.634173 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:26.662315 3219848 cri.go:89] found id: ""
	I1217 12:03:26.662341 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.662350 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:26.662357 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:26.662417 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:26.689598 3219848 cri.go:89] found id: ""
	I1217 12:03:26.689623 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.689633 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:26.689640 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:26.689704 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:26.716815 3219848 cri.go:89] found id: ""
	I1217 12:03:26.716841 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.716850 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:26.716858 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:26.716926 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:26.743338 3219848 cri.go:89] found id: ""
	I1217 12:03:26.743364 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.743375 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:26.743382 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:26.743447 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:26.799290 3219848 cri.go:89] found id: ""
	I1217 12:03:26.799326 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.799335 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:26.799342 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:26.799412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:26.854473 3219848 cri.go:89] found id: ""
	I1217 12:03:26.854539 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.854555 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:26.854563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:26.854625 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:26.880552 3219848 cri.go:89] found id: ""
	I1217 12:03:26.880581 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.880591 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:26.880598 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:26.880659 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:26.906009 3219848 cri.go:89] found id: ""
	I1217 12:03:26.906042 3219848 logs.go:282] 0 containers: []
	W1217 12:03:26.906052 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:26.906061 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:26.906072 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:26.971795 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:26.963736    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.964328    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.965821    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.966197    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:26.967699    5476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:26.971818 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:26.971831 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:26.996929 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:26.996968 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:27.031442 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:27.031479 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:27.088296 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:27.088330 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:29.604978 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:29.615685 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:29.615754 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:29.642346 3219848 cri.go:89] found id: ""
	I1217 12:03:29.642375 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.642384 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:29.642391 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:29.642449 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:29.669188 3219848 cri.go:89] found id: ""
	I1217 12:03:29.669214 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.669223 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:29.669230 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:29.669293 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:29.695623 3219848 cri.go:89] found id: ""
	I1217 12:03:29.695648 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.695657 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:29.695663 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:29.695729 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:29.721447 3219848 cri.go:89] found id: ""
	I1217 12:03:29.721472 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.721482 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:29.721489 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:29.721551 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:29.746217 3219848 cri.go:89] found id: ""
	I1217 12:03:29.746244 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.746253 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:29.746261 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:29.746318 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:29.797088 3219848 cri.go:89] found id: ""
	I1217 12:03:29.797122 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.797131 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:29.797137 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:29.797210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:29.845942 3219848 cri.go:89] found id: ""
	I1217 12:03:29.845962 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.845971 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:29.845977 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:29.846041 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:29.881686 3219848 cri.go:89] found id: ""
	I1217 12:03:29.881714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:29.881723 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:29.881733 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:29.881745 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:29.938916 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:29.938949 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:29.954625 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:29.954702 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:30.048700 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:30.033826    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.034802    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036344    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.036964    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:30.039023    5597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:30.048776 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:30.048805 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:30.081544 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:30.081588 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:32.617502 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:32.628255 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:32.628328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:32.653287 3219848 cri.go:89] found id: ""
	I1217 12:03:32.653314 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.653323 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:32.653331 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:32.653393 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:32.678914 3219848 cri.go:89] found id: ""
	I1217 12:03:32.678938 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.678946 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:32.678952 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:32.679013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:32.705809 3219848 cri.go:89] found id: ""
	I1217 12:03:32.705835 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.705845 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:32.705852 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:32.705915 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:32.736249 3219848 cri.go:89] found id: ""
	I1217 12:03:32.736278 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.736294 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:32.736301 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:32.736382 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:32.777637 3219848 cri.go:89] found id: ""
	I1217 12:03:32.777666 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.777676 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:32.777684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:32.777749 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:32.848686 3219848 cri.go:89] found id: ""
	I1217 12:03:32.848726 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.848735 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:32.848742 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:32.848811 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:32.877608 3219848 cri.go:89] found id: ""
	I1217 12:03:32.877633 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.877643 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:32.877650 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:32.877715 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:32.912387 3219848 cri.go:89] found id: ""
	I1217 12:03:32.912443 3219848 logs.go:282] 0 containers: []
	W1217 12:03:32.912453 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:32.912463 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:32.912478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:32.973780 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:32.965664    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.966474    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.968080    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.968441    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:32.969916    5702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:32.973802 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:32.973816 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:32.999779 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:32.999818 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:33.035424 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:33.035456 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:33.095096 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:33.095136 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:35.611791 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:35.625472 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:35.625546 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:35.656243 3219848 cri.go:89] found id: ""
	I1217 12:03:35.656265 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.656273 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:35.656280 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:35.656339 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:35.681938 3219848 cri.go:89] found id: ""
	I1217 12:03:35.681964 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.681972 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:35.681978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:35.682038 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:35.711864 3219848 cri.go:89] found id: ""
	I1217 12:03:35.711887 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.711896 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:35.711902 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:35.711961 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:35.736900 3219848 cri.go:89] found id: ""
	I1217 12:03:35.736924 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.736932 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:35.736942 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:35.737002 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:35.796476 3219848 cri.go:89] found id: ""
	I1217 12:03:35.796553 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.796576 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:35.796598 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:35.796711 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:35.851385 3219848 cri.go:89] found id: ""
	I1217 12:03:35.851463 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.851487 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:35.851530 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:35.851627 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:35.879315 3219848 cri.go:89] found id: ""
	I1217 12:03:35.879388 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.879423 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:35.879447 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:35.879560 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:35.904369 3219848 cri.go:89] found id: ""
	I1217 12:03:35.904461 3219848 logs.go:282] 0 containers: []
	W1217 12:03:35.904485 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:35.904509 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:35.904539 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:35.962316 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:35.962358 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:35.978473 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:35.978503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:36.048228 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:36.039946    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.040655    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.042240    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.042853    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:36.043967    5818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:36.048254 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:36.048267 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:36.075099 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:36.075134 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
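	Across the iterations shown here (12:03:14 through 12:03:36) not a single control-plane container ever appears, so the failure sits upstream of the apiserver itself: the kubelet is not materializing the static pods. kubeadm writes those manifests to /etc/kubernetes/manifests, so a reasonable next step when reproducing this by hand (a sketch, not part of minikube's own retry logic) is to verify the manifests exist and that kubelet is actually running:

	    # Static-pod manifests kubeadm should have written:
	    minikube -p <profile> ssh -- ls -l /etc/kubernetes/manifests
	    # Is kubelet itself up?
	    minikube -p <profile> ssh -- sudo systemctl status kubelet --no-pager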
	I1217 12:03:38.607418 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:38.618789 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:38.618869 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:38.646270 3219848 cri.go:89] found id: ""
	I1217 12:03:38.646297 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.646307 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:38.646315 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:38.646379 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:38.671906 3219848 cri.go:89] found id: ""
	I1217 12:03:38.671931 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.671940 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:38.671947 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:38.672012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:38.696480 3219848 cri.go:89] found id: ""
	I1217 12:03:38.696504 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.696513 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:38.696520 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:38.696581 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:38.727000 3219848 cri.go:89] found id: ""
	I1217 12:03:38.727026 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.727036 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:38.727042 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:38.727114 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:38.782353 3219848 cri.go:89] found id: ""
	I1217 12:03:38.782381 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.782391 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:38.782398 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:38.782459 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:38.847087 3219848 cri.go:89] found id: ""
	I1217 12:03:38.847110 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.847118 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:38.847125 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:38.847183 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:38.874682 3219848 cri.go:89] found id: ""
	I1217 12:03:38.874704 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.874712 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:38.874718 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:38.874780 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:38.902269 3219848 cri.go:89] found id: ""
	I1217 12:03:38.902297 3219848 logs.go:282] 0 containers: []
	W1217 12:03:38.902306 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
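
Every probe in the block above returned an empty id, meaning no container for any control-plane component was ever created, not even an exited one. The same sweep can be reproduced by hand; a minimal sketch using the component names and the crictl invocation from the log:

    # Empty output for a name means no container (running or exited) exists for it.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%-24s %s\n' "$name" "$(sudo crictl ps -a --quiet --name=$name)"
    done
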
	I1217 12:03:38.902316 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:38.902331 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:38.967646 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:38.958671    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.959248    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961005    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961508    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.963014    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:38.958671    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.959248    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961005    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.961508    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:38.963014    5921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:38.967671 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:38.967685 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:38.993086 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:38.993121 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:39.024046 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:39.024079 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:39.080928 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:39.080962 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
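
For reference, the dmesg invocation above narrows the kernel ring buffer to what matters for a failed start; the flags, per util-linux dmesg:

    # -P            do not pipe output into a pager (keeps captured logs clean)
    # -H            human-readable timestamps
    # -L=never      disable color escape codes
    # --level ...   only warnings and more severe messages
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
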
	I1217 12:03:41.597202 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:41.608508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:41.608582 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:41.634319 3219848 cri.go:89] found id: ""
	I1217 12:03:41.634344 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.634359 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:41.634366 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:41.634427 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:41.660053 3219848 cri.go:89] found id: ""
	I1217 12:03:41.660076 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.660085 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:41.660092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:41.660159 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:41.686022 3219848 cri.go:89] found id: ""
	I1217 12:03:41.686047 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.686056 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:41.686062 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:41.686119 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:41.711689 3219848 cri.go:89] found id: ""
	I1217 12:03:41.711714 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.711723 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:41.711729 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:41.711798 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:41.738135 3219848 cri.go:89] found id: ""
	I1217 12:03:41.738161 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.738170 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:41.738177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:41.738235 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:41.794953 3219848 cri.go:89] found id: ""
	I1217 12:03:41.794975 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.794984 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:41.794991 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:41.795051 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:41.832712 3219848 cri.go:89] found id: ""
	I1217 12:03:41.832747 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.832755 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:41.832762 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:41.832872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:41.862947 3219848 cri.go:89] found id: ""
	I1217 12:03:41.862967 3219848 logs.go:282] 0 containers: []
	W1217 12:03:41.862976 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:41.862985 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:41.862996 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:41.888484 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:41.888519 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:41.919432 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:41.919461 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:41.979083 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:41.979117 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:41.995225 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:41.995256 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:42.068500 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:42.058178    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.059172    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061060    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061946    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.063848    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:42.058178    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.059172    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061060    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.061946    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:42.063848    6053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
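
Each cycle opens with the same process check, which is why the whole block repeats: until pgrep finds a running apiserver, minikube re-gathers all of the logs above. The pgrep flags, for reference (the pattern is quoted here to keep the shell from globbing it):

    # -x  match the pattern against the whole string (exact match)
    # -n  report only the newest matching process
    # -f  match against the full command line, not just the process name
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process yet"
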
	I1217 12:03:44.569152 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:44.579717 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:44.579791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:44.604579 3219848 cri.go:89] found id: ""
	I1217 12:03:44.604605 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.604614 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:44.604621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:44.604680 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:44.628954 3219848 cri.go:89] found id: ""
	I1217 12:03:44.628987 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.628997 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:44.629004 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:44.629066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:44.657345 3219848 cri.go:89] found id: ""
	I1217 12:03:44.657372 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.657381 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:44.657388 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:44.657445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:44.682960 3219848 cri.go:89] found id: ""
	I1217 12:03:44.682983 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.683000 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:44.683007 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:44.683066 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:44.712406 3219848 cri.go:89] found id: ""
	I1217 12:03:44.712451 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.712461 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:44.712468 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:44.712526 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:44.737929 3219848 cri.go:89] found id: ""
	I1217 12:03:44.737952 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.737961 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:44.737967 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:44.738027 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:44.778893 3219848 cri.go:89] found id: ""
	I1217 12:03:44.778921 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.778930 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:44.778938 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:44.779003 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:44.818695 3219848 cri.go:89] found id: ""
	I1217 12:03:44.818724 3219848 logs.go:282] 0 containers: []
	W1217 12:03:44.818733 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:44.818742 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:44.818754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:44.888711 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:44.888748 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:44.905193 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:44.905224 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:44.969126 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:44.960653    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.961469    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963160    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963503    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.964997    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:44.960653    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.961469    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963160    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.963503    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:44.964997    6154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:44.969149 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:44.969162 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:44.995233 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:44.995272 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:47.580853 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:47.591106 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:47.591173 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:47.616262 3219848 cri.go:89] found id: ""
	I1217 12:03:47.616294 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.616304 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:47.616317 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:47.616384 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:47.641674 3219848 cri.go:89] found id: ""
	I1217 12:03:47.641702 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.641712 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:47.641718 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:47.641778 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:47.667191 3219848 cri.go:89] found id: ""
	I1217 12:03:47.667215 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.667224 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:47.667230 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:47.667296 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:47.696304 3219848 cri.go:89] found id: ""
	I1217 12:03:47.696332 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.696341 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:47.696349 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:47.696412 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:47.726109 3219848 cri.go:89] found id: ""
	I1217 12:03:47.726134 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.726143 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:47.726149 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:47.726212 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:47.762878 3219848 cri.go:89] found id: ""
	I1217 12:03:47.762904 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.762914 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:47.762920 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:47.762977 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:47.824894 3219848 cri.go:89] found id: ""
	I1217 12:03:47.824932 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.824957 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:47.824973 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:47.825056 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:47.851816 3219848 cri.go:89] found id: ""
	I1217 12:03:47.851852 3219848 logs.go:282] 0 containers: []
	W1217 12:03:47.851861 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:47.851888 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:47.851907 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:47.908314 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:47.908352 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:47.924222 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:47.924250 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:47.986251 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:47.978126    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.978646    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980334    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980816    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.982319    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:47.978126    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.978646    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980334    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.980816    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:47.982319    6267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:47.986276 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:47.986290 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:48.010815 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:48.010855 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:50.542164 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:50.553364 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:50.553437 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:50.581389 3219848 cri.go:89] found id: ""
	I1217 12:03:50.581423 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.581432 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:50.581439 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:50.581508 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:50.610382 3219848 cri.go:89] found id: ""
	I1217 12:03:50.610405 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.610413 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:50.610422 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:50.610482 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:50.636111 3219848 cri.go:89] found id: ""
	I1217 12:03:50.636137 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.636147 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:50.636153 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:50.636218 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:50.661308 3219848 cri.go:89] found id: ""
	I1217 12:03:50.661334 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.661342 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:50.661350 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:50.661415 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:50.688144 3219848 cri.go:89] found id: ""
	I1217 12:03:50.688172 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.688181 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:50.688187 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:50.688251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:50.715059 3219848 cri.go:89] found id: ""
	I1217 12:03:50.715087 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.715096 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:50.715103 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:50.715165 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:50.745229 3219848 cri.go:89] found id: ""
	I1217 12:03:50.745253 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.745262 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:50.745269 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:50.745330 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:50.793705 3219848 cri.go:89] found id: ""
	I1217 12:03:50.793735 3219848 logs.go:282] 0 containers: []
	W1217 12:03:50.793743 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:50.793752 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:50.793763 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:50.876190 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:50.876229 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:50.893552 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:50.893581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:50.960907 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:50.951439    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.952410    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954030    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954408    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.956833    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:50.951439    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.952410    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954030    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.954408    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:50.956833    6377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:50.960928 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:50.960942 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:50.986454 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:50.986485 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:53.522123 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:53.533167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:53.533246 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:53.558553 3219848 cri.go:89] found id: ""
	I1217 12:03:53.558580 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.558589 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:53.558596 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:53.558668 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:53.586267 3219848 cri.go:89] found id: ""
	I1217 12:03:53.586295 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.586305 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:53.586318 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:53.586383 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:53.613148 3219848 cri.go:89] found id: ""
	I1217 12:03:53.613174 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.613183 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:53.613190 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:53.613251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:53.639336 3219848 cri.go:89] found id: ""
	I1217 12:03:53.639371 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.639381 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:53.639387 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:53.639452 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:53.664632 3219848 cri.go:89] found id: ""
	I1217 12:03:53.664700 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.664730 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:53.664745 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:53.664820 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:53.689663 3219848 cri.go:89] found id: ""
	I1217 12:03:53.689733 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.689760 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:53.689774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:53.689851 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:53.714636 3219848 cri.go:89] found id: ""
	I1217 12:03:53.714707 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.714733 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:53.714747 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:53.714827 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:53.744583 3219848 cri.go:89] found id: ""
	I1217 12:03:53.744610 3219848 logs.go:282] 0 containers: []
	W1217 12:03:53.744620 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:53.744629 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:53.744640 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:53.833845 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:53.833884 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:53.853606 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:53.853632 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:53.921245 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:53.912685    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.913171    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.914992    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.915543    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.917157    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:53.912685    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.913171    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.914992    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.915543    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:53.917157    6492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:53.921269 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:53.921282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:53.946578 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:53.946611 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:56.477034 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:56.488539 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:56.488622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:56.514320 3219848 cri.go:89] found id: ""
	I1217 12:03:56.514347 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.514356 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:56.514363 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:56.514426 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:56.540629 3219848 cri.go:89] found id: ""
	I1217 12:03:56.540668 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.540676 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:56.540687 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:56.540752 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:56.571552 3219848 cri.go:89] found id: ""
	I1217 12:03:56.571586 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.571595 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:56.571602 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:56.571725 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:56.598758 3219848 cri.go:89] found id: ""
	I1217 12:03:56.598835 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.598858 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:56.598878 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:56.598964 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:56.624629 3219848 cri.go:89] found id: ""
	I1217 12:03:56.624659 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.624668 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:56.624675 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:56.624736 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:56.650192 3219848 cri.go:89] found id: ""
	I1217 12:03:56.650214 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.650222 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:56.650229 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:56.650286 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:56.675523 3219848 cri.go:89] found id: ""
	I1217 12:03:56.675548 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.675557 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:56.675563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:56.675651 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:56.701703 3219848 cri.go:89] found id: ""
	I1217 12:03:56.701731 3219848 logs.go:282] 0 containers: []
	W1217 12:03:56.701740 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:56.701751 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:56.701762 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:56.717844 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:56.717877 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:56.837097 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:56.823600    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.824395    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827077    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827764    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.829468    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:03:56.823600    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.824395    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827077    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.827764    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:56.829468    6597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:03:56.837160 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:56.837195 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:56.864759 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:56.864792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:03:56.892589 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:56.892615 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:59.450097 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:03:59.460573 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:03:59.460649 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:03:59.484966 3219848 cri.go:89] found id: ""
	I1217 12:03:59.484992 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.485001 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:03:59.485007 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:03:59.485073 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:03:59.509519 3219848 cri.go:89] found id: ""
	I1217 12:03:59.509545 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.509554 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:03:59.509561 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:03:59.509619 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:03:59.535238 3219848 cri.go:89] found id: ""
	I1217 12:03:59.535307 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.535331 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:03:59.535351 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:03:59.535443 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:03:59.561799 3219848 cri.go:89] found id: ""
	I1217 12:03:59.561823 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.561832 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:03:59.561839 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:03:59.561898 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:03:59.587394 3219848 cri.go:89] found id: ""
	I1217 12:03:59.587416 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.587425 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:03:59.587431 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:03:59.587489 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:03:59.614672 3219848 cri.go:89] found id: ""
	I1217 12:03:59.614695 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.614704 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:03:59.614712 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:03:59.614774 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:03:59.641144 3219848 cri.go:89] found id: ""
	I1217 12:03:59.641171 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.641180 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:03:59.641187 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:03:59.641251 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:03:59.667139 3219848 cri.go:89] found id: ""
	I1217 12:03:59.667167 3219848 logs.go:282] 0 containers: []
	W1217 12:03:59.667176 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:03:59.667184 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:03:59.667196 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:03:59.725056 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:03:59.725091 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:03:59.741510 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:03:59.741593 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:03:59.858554 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:03:59.849895    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.850546    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852238    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.852841    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:03:59.854544    6711 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:03:59.858578 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:03:59.858592 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:03:59.884457 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:03:59.884492 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
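Each pass of this loop (repeated below at roughly three-second intervals until the start-up wait gives up) runs the same probe sequence: a pgrep for a kube-apiserver process, one crictl query per control-plane component, and, when every query comes back empty, a round of log gathering. Condensed into shell form, with the commands taken from the log itself (quoting added) and the for-loop used only as shorthand, a single pass looks roughly like this:

    # probe: is an apiserver process or any control-plane container present?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"    # every query returns no IDs in this run
    done

    # diagnostics gathered once the probe comes back empty
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Only the describe-nodes step fails loudly, because it is the one command that needs the apiserver on localhost:8443 that the probe just failed to find.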
	I1217 12:04:02.413040 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:02.426774 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:02.426848 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:02.456484 3219848 cri.go:89] found id: ""
	I1217 12:04:02.456587 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.456601 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:02.456609 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:02.456706 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:02.485434 3219848 cri.go:89] found id: ""
	I1217 12:04:02.485506 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.485531 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:02.485547 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:02.485622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:02.512063 3219848 cri.go:89] found id: ""
	I1217 12:04:02.512100 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.512109 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:02.512116 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:02.512195 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:02.538362 3219848 cri.go:89] found id: ""
	I1217 12:04:02.538433 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.538454 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:02.538462 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:02.538525 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:02.567959 3219848 cri.go:89] found id: ""
	I1217 12:04:02.567994 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.568003 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:02.568009 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:02.568077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:02.594823 3219848 cri.go:89] found id: ""
	I1217 12:04:02.594860 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.594869 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:02.594876 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:02.594950 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:02.625125 3219848 cri.go:89] found id: ""
	I1217 12:04:02.625196 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.625211 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:02.625219 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:02.625282 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:02.650998 3219848 cri.go:89] found id: ""
	I1217 12:04:02.651033 3219848 logs.go:282] 0 containers: []
	W1217 12:04:02.651042 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:02.651051 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:02.651062 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:02.676950 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:02.676984 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:02.711118 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:02.711144 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:02.774152 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:02.774233 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:02.794787 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:02.794862 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:02.886703 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:02.878272    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.878713    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880270    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.880830    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:02.882492    6841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:05.386993 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:05.398225 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:05.398299 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:05.426294 3219848 cri.go:89] found id: ""
	I1217 12:04:05.426321 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.426330 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:05.426337 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:05.426399 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:05.451004 3219848 cri.go:89] found id: ""
	I1217 12:04:05.451027 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.451036 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:05.451049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:05.451112 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:05.476504 3219848 cri.go:89] found id: ""
	I1217 12:04:05.476532 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.476542 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:05.476549 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:05.476607 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:05.506001 3219848 cri.go:89] found id: ""
	I1217 12:04:05.506028 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.506036 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:05.506043 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:05.506103 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:05.531776 3219848 cri.go:89] found id: ""
	I1217 12:04:05.531803 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.531813 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:05.531820 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:05.531878 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:05.558040 3219848 cri.go:89] found id: ""
	I1217 12:04:05.558068 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.558078 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:05.558085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:05.558149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:05.582988 3219848 cri.go:89] found id: ""
	I1217 12:04:05.583024 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.583033 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:05.583040 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:05.583115 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:05.609687 3219848 cri.go:89] found id: ""
	I1217 12:04:05.609725 3219848 logs.go:282] 0 containers: []
	W1217 12:04:05.609734 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:05.609744 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:05.609756 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:05.677594 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:05.668798    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.669411    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671028    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.671605    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:05.673145    6931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:05.677661 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:05.677689 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:05.704024 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:05.704062 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:05.736880 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:05.736906 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:05.810417 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:05.810457 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:08.343493 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:08.353931 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:08.354001 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:08.377982 3219848 cri.go:89] found id: ""
	I1217 12:04:08.378050 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.378062 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:08.378069 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:08.378160 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:08.402837 3219848 cri.go:89] found id: ""
	I1217 12:04:08.402870 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.402880 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:08.402886 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:08.402956 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:08.430641 3219848 cri.go:89] found id: ""
	I1217 12:04:08.430666 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.430675 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:08.430682 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:08.430747 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:08.455904 3219848 cri.go:89] found id: ""
	I1217 12:04:08.455937 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.455947 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:08.455954 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:08.456020 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:08.480357 3219848 cri.go:89] found id: ""
	I1217 12:04:08.480388 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.480398 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:08.480405 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:08.480506 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:08.505595 3219848 cri.go:89] found id: ""
	I1217 12:04:08.505629 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.505682 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:08.505701 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:08.505765 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:08.531028 3219848 cri.go:89] found id: ""
	I1217 12:04:08.531065 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.531074 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:08.531081 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:08.531156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:08.559015 3219848 cri.go:89] found id: ""
	I1217 12:04:08.559051 3219848 logs.go:282] 0 containers: []
	W1217 12:04:08.559060 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:08.559069 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:08.559081 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:08.574853 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:08.574883 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:08.640119 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:08.631556    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.632320    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634049    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.634630    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:08.635699    7049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:08.640141 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:08.640154 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:08.666054 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:08.666091 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:08.694523 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:08.694553 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:11.260393 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:11.271847 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:11.271939 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:11.297537 3219848 cri.go:89] found id: ""
	I1217 12:04:11.297559 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.297568 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:11.297574 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:11.297669 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:11.326252 3219848 cri.go:89] found id: ""
	I1217 12:04:11.326279 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.326288 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:11.326295 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:11.326354 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:11.354965 3219848 cri.go:89] found id: ""
	I1217 12:04:11.354991 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.355013 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:11.355020 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:11.355085 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:11.379623 3219848 cri.go:89] found id: ""
	I1217 12:04:11.379649 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.379657 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:11.379664 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:11.379730 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:11.405089 3219848 cri.go:89] found id: ""
	I1217 12:04:11.405157 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.405185 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:11.405200 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:11.405276 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:11.431039 3219848 cri.go:89] found id: ""
	I1217 12:04:11.431064 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.431073 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:11.431079 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:11.431138 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:11.456294 3219848 cri.go:89] found id: ""
	I1217 12:04:11.456329 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.456338 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:11.456345 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:11.456437 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:11.485568 3219848 cri.go:89] found id: ""
	I1217 12:04:11.485595 3219848 logs.go:282] 0 containers: []
	W1217 12:04:11.485604 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:11.485613 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:11.485628 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:11.542231 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:11.542268 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:11.559119 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:11.559201 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:11.628507 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:11.619906    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.620667    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622406    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.622904    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:11.624511    7162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:11.628580 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:11.628617 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:11.654658 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:11.654692 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:14.187317 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:14.200950 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:14.201028 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:14.225871 3219848 cri.go:89] found id: ""
	I1217 12:04:14.225907 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.225917 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:14.225924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:14.225982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:14.255169 3219848 cri.go:89] found id: ""
	I1217 12:04:14.255194 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.255203 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:14.255210 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:14.255270 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:14.279884 3219848 cri.go:89] found id: ""
	I1217 12:04:14.279914 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.279928 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:14.279935 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:14.279993 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:14.303876 3219848 cri.go:89] found id: ""
	I1217 12:04:14.303902 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.303911 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:14.303918 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:14.303982 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:14.329876 3219848 cri.go:89] found id: ""
	I1217 12:04:14.329902 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.329911 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:14.329924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:14.329993 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:14.355681 3219848 cri.go:89] found id: ""
	I1217 12:04:14.355707 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.355723 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:14.355730 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:14.355791 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:14.380557 3219848 cri.go:89] found id: ""
	I1217 12:04:14.380582 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.380591 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:14.380607 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:14.380669 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:14.406559 3219848 cri.go:89] found id: ""
	I1217 12:04:14.406626 3219848 logs.go:282] 0 containers: []
	W1217 12:04:14.406652 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:14.406671 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:14.406684 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:14.435535 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:14.435567 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:14.496057 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:14.496100 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:14.512036 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:14.512068 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:14.581215 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:14.571459    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.572243    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574240    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.574925    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:14.576493    7285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:14.581280 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:14.581299 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:17.108603 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:17.119638 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:17.119710 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:17.144879 3219848 cri.go:89] found id: ""
	I1217 12:04:17.144901 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.144909 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:17.144915 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:17.144976 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:17.169341 3219848 cri.go:89] found id: ""
	I1217 12:04:17.169366 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.169375 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:17.169381 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:17.169440 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:17.193770 3219848 cri.go:89] found id: ""
	I1217 12:04:17.193792 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.193800 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:17.193806 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:17.193867 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:17.218766 3219848 cri.go:89] found id: ""
	I1217 12:04:17.218788 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.218797 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:17.218804 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:17.218911 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:17.246745 3219848 cri.go:89] found id: ""
	I1217 12:04:17.246768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.246777 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:17.246783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:17.246844 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:17.271877 3219848 cri.go:89] found id: ""
	I1217 12:04:17.271898 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.271907 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:17.271914 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:17.271971 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:17.296098 3219848 cri.go:89] found id: ""
	I1217 12:04:17.296124 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.296133 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:17.296140 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:17.296202 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:17.321740 3219848 cri.go:89] found id: ""
	I1217 12:04:17.321767 3219848 logs.go:282] 0 containers: []
	W1217 12:04:17.321777 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:17.321788 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:17.321799 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:17.378911 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:17.378944 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:17.395425 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:17.395454 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:17.458148 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:17.450570    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.450926    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.452495    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.452908    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:17.454301    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:17.458172 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:17.458185 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:17.483130 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:17.483199 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:20.011622 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:20.036129 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:20.036210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:20.069785 3219848 cri.go:89] found id: ""
	I1217 12:04:20.069812 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.069820 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:20.069826 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:20.069891 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:20.118138 3219848 cri.go:89] found id: ""
	I1217 12:04:20.118165 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.118174 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:20.118180 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:20.118287 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:20.145219 3219848 cri.go:89] found id: ""
	I1217 12:04:20.145246 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.145267 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:20.145274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:20.145340 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:20.171515 3219848 cri.go:89] found id: ""
	I1217 12:04:20.171541 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.171549 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:20.171556 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:20.171615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:20.198371 3219848 cri.go:89] found id: ""
	I1217 12:04:20.198393 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.198409 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:20.198416 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:20.198476 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:20.226505 3219848 cri.go:89] found id: ""
	I1217 12:04:20.226529 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.226538 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:20.226544 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:20.226604 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:20.251848 3219848 cri.go:89] found id: ""
	I1217 12:04:20.251874 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.251883 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:20.251890 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:20.251951 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:20.281838 3219848 cri.go:89] found id: ""
	I1217 12:04:20.281863 3219848 logs.go:282] 0 containers: []
	W1217 12:04:20.281872 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:20.281887 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:20.281899 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:20.344875 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:20.336196    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.336887    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.338603    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.339150    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:20.340924    7496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:04:20.344897 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:20.344909 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:20.370205 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:20.370244 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:20.403171 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:20.403203 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:20.459306 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:20.459342 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
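Every pass above ends the same way: zero containers for all eight component names, and connection refused from localhost:8443. That pattern points at an apiserver pod that was never created, rather than one that crashed and is restarting. For reproducing this by hand, a spot check inside the node gives the same signal; this is a sketch, not part of the test run: the docker exec target is a placeholder and the grep filter is an assumption, while the crictl and journalctl commands are the ones the test itself runs.

    docker exec -it <node-container> bash          # substitute the profile's container name from `docker ps`
    sudo crictl ps -a --name=kube-apiserver                    # empty here, matching the log
    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail'  # look for why no static pods started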
	I1217 12:04:22.976954 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:22.987706 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:22.987785 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:23.048240 3219848 cri.go:89] found id: ""
	I1217 12:04:23.048267 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.048276 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:23.048282 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:23.048342 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:23.098972 3219848 cri.go:89] found id: ""
	I1217 12:04:23.099001 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.099041 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:23.099055 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:23.099142 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:23.130170 3219848 cri.go:89] found id: ""
	I1217 12:04:23.130192 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.130201 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:23.130207 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:23.130266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:23.157897 3219848 cri.go:89] found id: ""
	I1217 12:04:23.157919 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.157927 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:23.157933 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:23.157990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:23.186732 3219848 cri.go:89] found id: ""
	I1217 12:04:23.186757 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.186766 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:23.186772 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:23.186834 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:23.211252 3219848 cri.go:89] found id: ""
	I1217 12:04:23.211278 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.211287 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:23.211294 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:23.211360 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:23.235484 3219848 cri.go:89] found id: ""
	I1217 12:04:23.235507 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.235516 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:23.235523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:23.235593 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:23.263167 3219848 cri.go:89] found id: ""
	I1217 12:04:23.263195 3219848 logs.go:282] 0 containers: []
	W1217 12:04:23.263204 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:23.263213 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:23.263224 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:23.319468 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:23.319503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:23.335277 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:23.335309 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:23.401412 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:23.393032    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.393444    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395045    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395905    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.397587    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:23.393032    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.393444    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395045    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.395905    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:23.397587    7614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
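Each failed "describe nodes" block has the same shape: kubectl on the node reads /var/lib/minikube/kubeconfig, which points at https://localhost:8443, and the dial to [::1]:8443 is refused because no kube-apiserver container exists (the crictl sweep above returned nothing). A quick check that nothing is listening on that port, again with <profile> as a placeholder:

	minikube ssh -p <profile> -- curl -sk https://localhost:8443/healthz   # <profile> is a placeholder; empty output (curl exit code 7) means nothing is listening on 8443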
	I1217 12:04:23.401435 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:23.401447 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:23.427002 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:23.427042 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:25.955964 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:25.966813 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:25.966907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:25.991674 3219848 cri.go:89] found id: ""
	I1217 12:04:25.991698 3219848 logs.go:282] 0 containers: []
	W1217 12:04:25.991707 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:25.991714 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:25.991828 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:26.043851 3219848 cri.go:89] found id: ""
	I1217 12:04:26.043878 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.043888 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:26.043895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:26.043963 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:26.099675 3219848 cri.go:89] found id: ""
	I1217 12:04:26.099700 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.099708 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:26.099714 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:26.099786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:26.129744 3219848 cri.go:89] found id: ""
	I1217 12:04:26.129768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.129776 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:26.129783 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:26.129849 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:26.155393 3219848 cri.go:89] found id: ""
	I1217 12:04:26.155420 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.155428 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:26.155434 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:26.155492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:26.182178 3219848 cri.go:89] found id: ""
	I1217 12:04:26.182200 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.182209 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:26.182216 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:26.182277 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:26.206976 3219848 cri.go:89] found id: ""
	I1217 12:04:26.207000 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.207009 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:26.207015 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:26.207072 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:26.231357 3219848 cri.go:89] found id: ""
	I1217 12:04:26.231383 3219848 logs.go:282] 0 containers: []
	W1217 12:04:26.231391 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:26.231400 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:26.231411 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:26.287609 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:26.287646 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:26.303654 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:26.303701 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:26.372084 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:26.363097    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.363759    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.365390    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.366039    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.367715    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:26.363097    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.363759    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.365390    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.366039    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:26.367715    7727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:26.372107 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:26.372122 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:26.398349 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:26.398386 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
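The pgrep probe that opens each cycle, sudo pgrep -xnf kube-apiserver.*minikube.*, matches the pattern against the full command line (-f), requires a whole-line match (-x), and returns only the newest PID (-n); an empty result is what loops minikube back into log gathering. Run standalone, the same check reads:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo running || echo not running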
	I1217 12:04:28.926935 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:28.938567 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:28.938637 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:28.965018 3219848 cri.go:89] found id: ""
	I1217 12:04:28.965042 3219848 logs.go:282] 0 containers: []
	W1217 12:04:28.965050 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:28.965056 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:28.965116 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:28.993619 3219848 cri.go:89] found id: ""
	I1217 12:04:28.993646 3219848 logs.go:282] 0 containers: []
	W1217 12:04:28.993654 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:28.993661 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:28.993723 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:29.042253 3219848 cri.go:89] found id: ""
	I1217 12:04:29.042274 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.042282 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:29.042289 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:29.042347 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:29.109464 3219848 cri.go:89] found id: ""
	I1217 12:04:29.109486 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.109495 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:29.109501 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:29.109563 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:29.139820 3219848 cri.go:89] found id: ""
	I1217 12:04:29.139842 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.139850 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:29.139857 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:29.139917 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:29.165440 3219848 cri.go:89] found id: ""
	I1217 12:04:29.165465 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.165474 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:29.165481 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:29.165543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:29.191572 3219848 cri.go:89] found id: ""
	I1217 12:04:29.191597 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.191606 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:29.191613 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:29.191673 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:29.217986 3219848 cri.go:89] found id: ""
	I1217 12:04:29.218011 3219848 logs.go:282] 0 containers: []
	W1217 12:04:29.218020 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:29.218030 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:29.218041 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:29.274933 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:29.274967 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:29.290733 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:29.290760 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:29.358661 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:29.349933    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.350736    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352310    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352861    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.354377    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:29.349933    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.350736    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352310    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.352861    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:29.354377    7839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:29.358683 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:29.358697 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:29.385070 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:29.385107 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:31.914639 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:31.928018 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:31.928092 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:31.955140 3219848 cri.go:89] found id: ""
	I1217 12:04:31.955163 3219848 logs.go:282] 0 containers: []
	W1217 12:04:31.955171 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:31.955178 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:31.955252 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:31.982332 3219848 cri.go:89] found id: ""
	I1217 12:04:31.982364 3219848 logs.go:282] 0 containers: []
	W1217 12:04:31.982380 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:31.982387 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:31.982448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:32.045708 3219848 cri.go:89] found id: ""
	I1217 12:04:32.045731 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.045740 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:32.045746 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:32.045805 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:32.093198 3219848 cri.go:89] found id: ""
	I1217 12:04:32.093220 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.093229 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:32.093242 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:32.093301 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:32.120574 3219848 cri.go:89] found id: ""
	I1217 12:04:32.120641 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.120664 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:32.120684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:32.120772 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:32.151069 3219848 cri.go:89] found id: ""
	I1217 12:04:32.151137 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.151160 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:32.151182 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:32.151272 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:32.181226 3219848 cri.go:89] found id: ""
	I1217 12:04:32.181303 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.181326 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:32.181347 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:32.181439 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:32.207237 3219848 cri.go:89] found id: ""
	I1217 12:04:32.207295 3219848 logs.go:282] 0 containers: []
	W1217 12:04:32.207310 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:32.207324 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:32.207336 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:32.263771 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:32.263808 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:32.279666 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:32.279693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:32.345645 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:32.336846    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338076    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338956    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340462    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340816    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:32.336846    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338076    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.338956    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340462    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:32.340816    7950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:32.345666 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:32.345679 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:32.371311 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:32.371347 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
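Judging by the timestamps (12:04:20, :23, :26, :29, :32, ...), the probe-and-gather cycle repeats roughly every three seconds, so the blocks below are further iterations of the same wait loop rather than new failures. A purely illustrative shell sketch of an equivalent wait, not minikube's actual code:

	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 3; done   # illustrative sketch only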
	I1217 12:04:34.899829 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:34.911276 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:34.911354 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:34.936056 3219848 cri.go:89] found id: ""
	I1217 12:04:34.936080 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.936089 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:34.936096 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:34.936156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:34.962166 3219848 cri.go:89] found id: ""
	I1217 12:04:34.962192 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.962201 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:34.962207 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:34.962271 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:34.987891 3219848 cri.go:89] found id: ""
	I1217 12:04:34.987916 3219848 logs.go:282] 0 containers: []
	W1217 12:04:34.987926 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:34.987934 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:34.987994 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:35.036291 3219848 cri.go:89] found id: ""
	I1217 12:04:35.036319 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.036331 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:35.036339 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:35.036402 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:35.091997 3219848 cri.go:89] found id: ""
	I1217 12:04:35.092023 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.092041 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:35.092049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:35.092119 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:35.126699 3219848 cri.go:89] found id: ""
	I1217 12:04:35.126721 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.126736 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:35.126743 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:35.126802 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:35.152052 3219848 cri.go:89] found id: ""
	I1217 12:04:35.152077 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.152087 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:35.152094 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:35.152156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:35.177868 3219848 cri.go:89] found id: ""
	I1217 12:04:35.177897 3219848 logs.go:282] 0 containers: []
	W1217 12:04:35.177906 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:35.177916 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:35.177955 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:35.213172 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:35.213200 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:35.269771 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:35.269807 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:35.285802 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:35.285841 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:35.355953 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:35.345556    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.346168    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348018    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348336    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.351480    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:35.345556    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.346168    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348018    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.348336    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:35.351480    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:35.355976 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:35.355988 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:37.883397 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:37.894032 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:37.894101 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:37.927040 3219848 cri.go:89] found id: ""
	I1217 12:04:37.927066 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.927075 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:37.927085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:37.927150 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:37.951890 3219848 cri.go:89] found id: ""
	I1217 12:04:37.951916 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.951925 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:37.951931 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:37.951995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:37.978258 3219848 cri.go:89] found id: ""
	I1217 12:04:37.978286 3219848 logs.go:282] 0 containers: []
	W1217 12:04:37.978295 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:37.978302 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:37.978383 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:38.032665 3219848 cri.go:89] found id: ""
	I1217 12:04:38.032689 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.032698 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:38.032705 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:38.032770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:38.068588 3219848 cri.go:89] found id: ""
	I1217 12:04:38.068617 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.068626 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:38.068633 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:38.068703 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:38.111074 3219848 cri.go:89] found id: ""
	I1217 12:04:38.111102 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.111112 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:38.111119 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:38.111183 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:38.139962 3219848 cri.go:89] found id: ""
	I1217 12:04:38.139989 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.139998 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:38.140005 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:38.140071 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:38.165120 3219848 cri.go:89] found id: ""
	I1217 12:04:38.165147 3219848 logs.go:282] 0 containers: []
	W1217 12:04:38.165156 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:38.165165 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:38.165176 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:38.221183 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:38.221218 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:38.237532 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:38.237565 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:38.307341 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:38.299115    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.299933    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301496    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301856    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.303152    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:38.299115    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.299933    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301496    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.301856    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:38.303152    8174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:38.307362 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:38.307376 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:38.333705 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:38.333739 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
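The component sweep issues one crictl query per expected name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and every query returns an empty ID list: containerd itself is reachable, but no Kubernetes container was ever created. The same fact can be read from a single command on the node:

	sudo crictl ps -a --quiet   # empty output means containerd has no containers at all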
	I1217 12:04:40.864326 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:40.875421 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:40.875500 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:40.900554 3219848 cri.go:89] found id: ""
	I1217 12:04:40.900576 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.900586 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:40.900592 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:40.900654 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:40.926107 3219848 cri.go:89] found id: ""
	I1217 12:04:40.926134 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.926143 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:40.926151 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:40.926210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:40.951315 3219848 cri.go:89] found id: ""
	I1217 12:04:40.951341 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.951350 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:40.951356 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:40.951414 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:40.976682 3219848 cri.go:89] found id: ""
	I1217 12:04:40.976713 3219848 logs.go:282] 0 containers: []
	W1217 12:04:40.976723 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:40.976731 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:40.976790 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:41.016365 3219848 cri.go:89] found id: ""
	I1217 12:04:41.016388 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.016396 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:41.016403 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:41.016527 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:41.081810 3219848 cri.go:89] found id: ""
	I1217 12:04:41.081838 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.081848 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:41.081856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:41.081915 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:41.107919 3219848 cri.go:89] found id: ""
	I1217 12:04:41.107946 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.107955 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:41.107962 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:41.108032 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:41.134563 3219848 cri.go:89] found id: ""
	I1217 12:04:41.134589 3219848 logs.go:282] 0 containers: []
	W1217 12:04:41.134599 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:41.134608 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:41.134619 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:41.192325 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:41.192362 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:41.208694 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:41.208723 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:41.279184 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:41.267762    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.268617    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.272813    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.273616    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.275116    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:41.267762    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.268617    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.272813    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.273616    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:41.275116    8290 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:41.279207 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:41.279221 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:41.305398 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:41.305436 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
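With containerd up and zero control-plane containers across every sweep, the kubelet journal being collected above is the most likely place the root cause is recorded (for example, failures creating the static pods). A first pass one might try, with <profile> again a placeholder:

	minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail'   # <profile> is a placeholder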
	I1217 12:04:43.838273 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:43.849251 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:43.849321 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:43.873599 3219848 cri.go:89] found id: ""
	I1217 12:04:43.873671 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.873686 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:43.873694 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:43.873756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:43.902353 3219848 cri.go:89] found id: ""
	I1217 12:04:43.902378 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.902388 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:43.902395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:43.902486 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:43.928175 3219848 cri.go:89] found id: ""
	I1217 12:04:43.928202 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.928213 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:43.928220 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:43.928334 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:43.956883 3219848 cri.go:89] found id: ""
	I1217 12:04:43.956912 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.956921 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:43.956927 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:43.956996 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:43.982931 3219848 cri.go:89] found id: ""
	I1217 12:04:43.982968 3219848 logs.go:282] 0 containers: []
	W1217 12:04:43.982979 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:43.982986 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:43.983053 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:44.030268 3219848 cri.go:89] found id: ""
	I1217 12:04:44.030294 3219848 logs.go:282] 0 containers: []
	W1217 12:04:44.030304 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:44.030311 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:44.030388 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:44.082991 3219848 cri.go:89] found id: ""
	I1217 12:04:44.083021 3219848 logs.go:282] 0 containers: []
	W1217 12:04:44.083042 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:44.083049 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:44.083140 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:44.113120 3219848 cri.go:89] found id: ""
	I1217 12:04:44.113165 3219848 logs.go:282] 0 containers: []
	W1217 12:04:44.113175 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:44.113185 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:44.113204 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:44.172933 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:44.172970 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:44.189039 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:44.189066 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:44.257898 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:44.249336    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.250068    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.251815    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.252362    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.254049    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:44.249336    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.250068    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.251815    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.252362    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:44.254049    8401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
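	(The describe-nodes failures all share one cause: the on-node kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, localhost resolves to the IPv6 loopback [::1] first on this host, and with no kube-apiserver container running — see the empty crictl results above — nothing listens on that port, so every dial is refused. A small probe that reproduces the symptom, assuming the same port 8443 from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Try both loopback addresses; a refused dial on each confirms
		// no apiserver listener, matching the kubectl errors above.
		for _, addr := range []string{"[::1]:8443", "127.0.0.1:8443"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", addr, err) // "connect: connection refused" when nothing listens
				continue
			}
			conn.Close()
			fmt.Printf("%s: listening\n", addr)
		}
	})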
	I1217 12:04:44.257924 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:44.257937 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:44.283680 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:44.283715 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
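	(From here the log simply repeats this cycle: after each sweep, minikube re-checks for an apiserver process with sudo pgrep -xnf kube-apiserver.*minikube.* and, finding none, runs the same diagnostics again every two to three seconds (12:04:44, 12:04:46, 12:04:49, ...) until the start deadline expires. The rough shape of that wait loop, with stand-in helpers rather than minikube's real functions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the pgrep check between cycles:
	//   sudo pgrep -xnf kube-apiserver.*minikube.*
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	// waitForAPIServer polls until the process appears or the deadline
	// passes, re-gathering diagnostics on each miss, as the repeating
	// cycles in this log show.
	func waitForAPIServer(deadline time.Time, gatherDiagnostics func()) error {
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				return nil
			}
			gatherDiagnostics()
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("kube-apiserver did not start before %s", deadline)
	}

	func main() {
		err := waitForAPIServer(time.Now().Add(30*time.Second), func() {
			fmt.Println("gathering kubelet/dmesg/describe-nodes/containerd logs ...") // stands in for the sweep above
		})
		fmt.Println(err)
	})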
	I1217 12:04:46.821352 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:46.832441 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:46.832520 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:46.858364 3219848 cri.go:89] found id: ""
	I1217 12:04:46.858390 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.858400 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:46.858407 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:46.858488 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:46.882836 3219848 cri.go:89] found id: ""
	I1217 12:04:46.882868 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.882876 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:46.882883 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:46.882952 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:46.907815 3219848 cri.go:89] found id: ""
	I1217 12:04:46.907852 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.907861 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:46.907888 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:46.907972 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:46.933329 3219848 cri.go:89] found id: ""
	I1217 12:04:46.933353 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.933363 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:46.933377 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:46.933445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:46.959520 3219848 cri.go:89] found id: ""
	I1217 12:04:46.959546 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.959555 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:46.959562 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:46.959621 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:46.986527 3219848 cri.go:89] found id: ""
	I1217 12:04:46.986551 3219848 logs.go:282] 0 containers: []
	W1217 12:04:46.986561 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:46.986567 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:46.986627 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:47.034742 3219848 cri.go:89] found id: ""
	I1217 12:04:47.034765 3219848 logs.go:282] 0 containers: []
	W1217 12:04:47.034775 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:47.034781 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:47.034838 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:47.072115 3219848 cri.go:89] found id: ""
	I1217 12:04:47.072143 3219848 logs.go:282] 0 containers: []
	W1217 12:04:47.072152 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:47.072161 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:47.072173 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:47.138106 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:47.138141 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:47.156338 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:47.156381 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:47.224864 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:47.215946    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.216453    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.218361    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.219127    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.220895    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:47.215946    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.216453    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.218361    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.219127    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:47.220895    8513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:47.224889 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:47.224900 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:47.250608 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:47.250644 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:49.780985 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:49.791927 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:49.792002 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:49.817502 3219848 cri.go:89] found id: ""
	I1217 12:04:49.817526 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.817536 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:49.817542 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:49.817621 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:49.844464 3219848 cri.go:89] found id: ""
	I1217 12:04:49.844490 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.844499 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:49.844506 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:49.844614 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:49.874956 3219848 cri.go:89] found id: ""
	I1217 12:04:49.874982 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.874991 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:49.874998 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:49.875079 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:49.904772 3219848 cri.go:89] found id: ""
	I1217 12:04:49.904795 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.904804 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:49.904810 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:49.904872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:49.934337 3219848 cri.go:89] found id: ""
	I1217 12:04:49.934362 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.934372 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:49.934379 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:49.934472 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:49.959338 3219848 cri.go:89] found id: ""
	I1217 12:04:49.959362 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.959371 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:49.959378 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:49.959481 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:49.984578 3219848 cri.go:89] found id: ""
	I1217 12:04:49.984606 3219848 logs.go:282] 0 containers: []
	W1217 12:04:49.984614 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:49.984621 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:49.984679 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:50.043309 3219848 cri.go:89] found id: ""
	I1217 12:04:50.043395 3219848 logs.go:282] 0 containers: []
	W1217 12:04:50.043419 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:50.043456 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:50.043486 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:50.135752 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:50.127538    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.128073    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.129753    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.130124    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.131773    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:50.127538    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.128073    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.129753    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.130124    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:50.131773    8621 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:50.135777 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:50.135792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:50.162030 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:50.162067 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:50.196447 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:50.196478 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:50.254281 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:50.254318 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:52.772408 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:52.783553 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:52.783633 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:52.820008 3219848 cri.go:89] found id: ""
	I1217 12:04:52.820043 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.820058 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:52.820065 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:52.820129 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:52.844905 3219848 cri.go:89] found id: ""
	I1217 12:04:52.844941 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.844949 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:52.844956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:52.845029 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:52.869543 3219848 cri.go:89] found id: ""
	I1217 12:04:52.869569 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.869586 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:52.869622 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:52.869698 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:52.894131 3219848 cri.go:89] found id: ""
	I1217 12:04:52.894160 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.894170 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:52.894177 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:52.894266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:52.921694 3219848 cri.go:89] found id: ""
	I1217 12:04:52.921719 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.921729 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:52.921736 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:52.921795 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:52.947377 3219848 cri.go:89] found id: ""
	I1217 12:04:52.947411 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.947421 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:52.947452 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:52.947531 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:52.972742 3219848 cri.go:89] found id: ""
	I1217 12:04:52.972768 3219848 logs.go:282] 0 containers: []
	W1217 12:04:52.972777 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:52.972787 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:52.972866 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:53.016484 3219848 cri.go:89] found id: ""
	I1217 12:04:53.016566 3219848 logs.go:282] 0 containers: []
	W1217 12:04:53.016588 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:53.016612 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:53.016657 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:53.091083 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:53.091153 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:53.109051 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:53.109075 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:53.174985 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:53.166259    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.167099    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.168974    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.169308    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.170842    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:53.166259    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.167099    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.168974    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.169308    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:53.170842    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:53.175008 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:53.175021 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:53.201645 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:53.201680 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:55.729262 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:55.742969 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:55.743043 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:55.772352 3219848 cri.go:89] found id: ""
	I1217 12:04:55.772374 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.772383 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:55.772389 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:55.772461 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:55.799085 3219848 cri.go:89] found id: ""
	I1217 12:04:55.799111 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.799120 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:55.799126 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:55.799191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:55.825805 3219848 cri.go:89] found id: ""
	I1217 12:04:55.825830 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.825839 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:55.825846 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:55.825907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:55.855875 3219848 cri.go:89] found id: ""
	I1217 12:04:55.855964 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.855979 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:55.855987 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:55.856055 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:55.881512 3219848 cri.go:89] found id: ""
	I1217 12:04:55.881539 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.881548 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:55.881555 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:55.881615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:55.911117 3219848 cri.go:89] found id: ""
	I1217 12:04:55.911149 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.911158 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:55.911165 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:55.911236 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:55.936738 3219848 cri.go:89] found id: ""
	I1217 12:04:55.936774 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.936783 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:55.936790 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:55.936865 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:55.962878 3219848 cri.go:89] found id: ""
	I1217 12:04:55.962904 3219848 logs.go:282] 0 containers: []
	W1217 12:04:55.962918 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:55.962937 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:55.962950 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:55.991943 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:55.991988 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:04:56.062887 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:56.062922 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:56.129315 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:56.129356 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:56.145986 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:56.146013 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:56.214623 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:56.205795    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.206560    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208125    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208507    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.210116    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:56.205795    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.206560    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208125    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.208507    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:56.210116    8868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:58.715974 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:04:58.727395 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:04:58.727466 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:04:58.751936 3219848 cri.go:89] found id: ""
	I1217 12:04:58.751961 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.751970 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:04:58.751977 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:04:58.752036 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:04:58.778416 3219848 cri.go:89] found id: ""
	I1217 12:04:58.778439 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.778447 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:04:58.778454 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:04:58.778517 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:04:58.806136 3219848 cri.go:89] found id: ""
	I1217 12:04:58.806160 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.806169 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:04:58.806175 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:04:58.806233 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:04:58.835276 3219848 cri.go:89] found id: ""
	I1217 12:04:58.835311 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.835321 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:04:58.835328 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:04:58.835396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:04:58.862517 3219848 cri.go:89] found id: ""
	I1217 12:04:58.862596 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.862612 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:04:58.862620 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:04:58.862695 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:04:58.888027 3219848 cri.go:89] found id: ""
	I1217 12:04:58.888055 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.888065 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:04:58.888072 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:04:58.888156 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:04:58.913027 3219848 cri.go:89] found id: ""
	I1217 12:04:58.913106 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.913123 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:04:58.913132 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:04:58.913210 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:04:58.938554 3219848 cri.go:89] found id: ""
	I1217 12:04:58.938578 3219848 logs.go:282] 0 containers: []
	W1217 12:04:58.938587 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:04:58.938599 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:04:58.938611 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:04:58.995142 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:04:58.995175 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:04:59.026309 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:04:59.026388 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:04:59.124135 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:04:59.115677    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117093    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117405    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.118755    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.119195    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:04:59.115677    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117093    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.117405    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.118755    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:04:59.119195    8970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:04:59.124157 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:04:59.124170 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:04:59.149882 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:04:59.149925 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:01.680518 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:01.692630 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:01.692709 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:01.721621 3219848 cri.go:89] found id: ""
	I1217 12:05:01.721647 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.721656 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:01.721664 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:01.721731 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:01.748186 3219848 cri.go:89] found id: ""
	I1217 12:05:01.748213 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.748232 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:01.748239 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:01.748310 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:01.774670 3219848 cri.go:89] found id: ""
	I1217 12:05:01.774694 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.774703 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:01.774709 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:01.774770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:01.800533 3219848 cri.go:89] found id: ""
	I1217 12:05:01.800609 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.800635 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:01.800649 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:01.800726 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:01.832193 3219848 cri.go:89] found id: ""
	I1217 12:05:01.832221 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.832230 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:01.832238 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:01.832314 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:01.859699 3219848 cri.go:89] found id: ""
	I1217 12:05:01.859733 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.859743 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:01.859750 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:01.859825 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:01.891844 3219848 cri.go:89] found id: ""
	I1217 12:05:01.891869 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.891893 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:01.891901 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:01.891988 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:01.922765 3219848 cri.go:89] found id: ""
	I1217 12:05:01.922791 3219848 logs.go:282] 0 containers: []
	W1217 12:05:01.922801 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:01.922811 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:01.922821 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:01.984618 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:01.984654 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:02.003531 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:02.003573 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:02.119039 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:02.109047    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.109431    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.111503    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.112283    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.113931    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:02.109047    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.109431    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.111503    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.112283    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:02.113931    9084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:02.119062 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:02.119074 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:02.145052 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:02.145090 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:04.675110 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:04.686658 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:04.686731 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:04.714143 3219848 cri.go:89] found id: ""
	I1217 12:05:04.714169 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.714178 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:04.714185 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:04.714246 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:04.741446 3219848 cri.go:89] found id: ""
	I1217 12:05:04.741472 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.741481 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:04.741488 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:04.741549 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:04.771197 3219848 cri.go:89] found id: ""
	I1217 12:05:04.771224 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.771234 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:04.771241 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:04.771305 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:04.801798 3219848 cri.go:89] found id: ""
	I1217 12:05:04.801824 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.801834 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:04.801840 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:04.801901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:04.827212 3219848 cri.go:89] found id: ""
	I1217 12:05:04.827240 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.827249 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:04.827257 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:04.827322 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:04.852794 3219848 cri.go:89] found id: ""
	I1217 12:05:04.852821 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.852831 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:04.852838 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:04.852898 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:04.879034 3219848 cri.go:89] found id: ""
	I1217 12:05:04.879058 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.879069 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:04.879075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:04.879134 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:04.904782 3219848 cri.go:89] found id: ""
	I1217 12:05:04.904806 3219848 logs.go:282] 0 containers: []
	W1217 12:05:04.904814 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:04.904823 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:04.904833 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:04.961550 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:04.961581 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:04.977831 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:04.977861 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:05.101127 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:05.083862    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.093276    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.094908    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.095507    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.097102    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:05.083862    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.093276    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.094908    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.095507    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:05.097102    9194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:05.101155 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:05.101168 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:05.128517 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:05.128550 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:07.660217 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:07.670837 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:07.670907 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:07.696773 3219848 cri.go:89] found id: ""
	I1217 12:05:07.696800 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.696809 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:07.696816 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:07.696873 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:07.722665 3219848 cri.go:89] found id: ""
	I1217 12:05:07.722688 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.722697 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:07.722703 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:07.722770 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:07.748882 3219848 cri.go:89] found id: ""
	I1217 12:05:07.748907 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.748916 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:07.748922 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:07.748983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:07.777951 3219848 cri.go:89] found id: ""
	I1217 12:05:07.777976 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.777985 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:07.777992 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:07.778052 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:07.807386 3219848 cri.go:89] found id: ""
	I1217 12:05:07.807414 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.807423 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:07.807430 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:07.807492 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:07.836910 3219848 cri.go:89] found id: ""
	I1217 12:05:07.836938 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.836947 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:07.836954 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:07.837012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:07.861301 3219848 cri.go:89] found id: ""
	I1217 12:05:07.861327 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.861337 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:07.861343 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:07.861402 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:07.885389 3219848 cri.go:89] found id: ""
	I1217 12:05:07.885412 3219848 logs.go:282] 0 containers: []
	W1217 12:05:07.885422 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:07.885431 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:07.885444 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:07.940922 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:07.940954 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:07.956764 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:07.956792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:08.040092 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:08.023500    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.024045    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.030582    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.031321    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:08.035763    9307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
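Every "describe nodes" attempt in this stretch fails the same way: the dial to [::1]:8443 is refused, meaning nothing is listening on the apiserver port at all, so kubectl never gets as far as TLS or authentication. A minimal probe that reproduces the symptom (an illustration, not minikube's health check):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // With no kube-apiserver running, this fails with
        // "dial tcp [::1]:8443: connect: connection refused", as in the log.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }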
	I1217 12:05:08.040167 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:08.040195 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:08.076595 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:08.076674 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:10.614548 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:10.625273 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:10.625344 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:10.649744 3219848 cri.go:89] found id: ""
	I1217 12:05:10.649774 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.649782 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:10.649789 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:10.649847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:10.673909 3219848 cri.go:89] found id: ""
	I1217 12:05:10.673936 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.673945 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:10.673952 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:10.674010 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:10.699817 3219848 cri.go:89] found id: ""
	I1217 12:05:10.699840 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.699849 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:10.699855 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:10.699914 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:10.724608 3219848 cri.go:89] found id: ""
	I1217 12:05:10.724630 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.724638 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:10.724645 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:10.724702 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:10.756858 3219848 cri.go:89] found id: ""
	I1217 12:05:10.756883 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.756892 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:10.756899 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:10.756959 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:10.787011 3219848 cri.go:89] found id: ""
	I1217 12:05:10.787037 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.787046 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:10.787052 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:10.787111 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:10.816658 3219848 cri.go:89] found id: ""
	I1217 12:05:10.816683 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.816691 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:10.816698 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:10.816757 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:10.841817 3219848 cri.go:89] found id: ""
	I1217 12:05:10.841882 3219848 logs.go:282] 0 containers: []
	W1217 12:05:10.841899 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:10.841909 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:10.841920 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:10.899952 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:10.899994 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:10.915585 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:10.915615 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:10.983597 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:10.975197    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.975853    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.977490    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.978147    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:10.979658    9421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:10.983619 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:10.983636 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:11.013827 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:11.013865 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
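Each cycle probes the same eight names in order (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) with crictl ps -a --quiet --name=<component>, and every query comes back empty. A sketch of that probe loop, with runSSH as a hypothetical stand-in for minikube's ssh_runner; illustrative only:

    package main

    import (
        "fmt"
        "strings"
    )

    // listIDs runs `crictl ps -a --quiet --name=<name>` through a caller-supplied
    // runner and splits the output into container IDs; empty output means none.
    func listIDs(runSSH func(string) (string, error), name string) ([]string, error) {
        out, err := runSSH("sudo crictl ps -a --quiet --name=" + name)
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        stub := func(cmd string) (string, error) { return "", nil } // dead cluster
        for _, c := range components {
            if ids, _ := listIDs(stub, c); len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }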
	I1217 12:05:13.590017 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:13.601224 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:13.601300 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:13.630748 3219848 cri.go:89] found id: ""
	I1217 12:05:13.630771 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.630781 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:13.630788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:13.630845 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:13.659125 3219848 cri.go:89] found id: ""
	I1217 12:05:13.659150 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.659160 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:13.659166 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:13.659224 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:13.689040 3219848 cri.go:89] found id: ""
	I1217 12:05:13.689066 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.689075 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:13.689082 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:13.689149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:13.713917 3219848 cri.go:89] found id: ""
	I1217 12:05:13.713941 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.713949 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:13.713956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:13.714016 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:13.738663 3219848 cri.go:89] found id: ""
	I1217 12:05:13.738686 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.738695 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:13.738701 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:13.738759 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:13.762897 3219848 cri.go:89] found id: ""
	I1217 12:05:13.762922 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.762931 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:13.762938 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:13.762995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:13.791695 3219848 cri.go:89] found id: ""
	I1217 12:05:13.791720 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.791736 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:13.791743 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:13.791800 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:13.821207 3219848 cri.go:89] found id: ""
	I1217 12:05:13.821230 3219848 logs.go:282] 0 containers: []
	W1217 12:05:13.821239 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:13.821248 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:13.821259 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:13.848837 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:13.848867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:13.906239 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:13.906278 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:13.921882 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:13.921917 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:13.991574 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:13.982172    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.983086    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985111    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.985659    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:13.986629    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:13.991596 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:13.991609 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:16.525032 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:16.535486 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:16.535556 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:16.561699 3219848 cri.go:89] found id: ""
	I1217 12:05:16.561721 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.561730 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:16.561736 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:16.561792 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:16.586264 3219848 cri.go:89] found id: ""
	I1217 12:05:16.586287 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.586296 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:16.586303 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:16.586360 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:16.611385 3219848 cri.go:89] found id: ""
	I1217 12:05:16.611409 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.611418 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:16.611425 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:16.611485 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:16.636230 3219848 cri.go:89] found id: ""
	I1217 12:05:16.636256 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.636267 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:16.636274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:16.636332 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:16.660919 3219848 cri.go:89] found id: ""
	I1217 12:05:16.660942 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.660950 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:16.660956 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:16.661013 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:16.688962 3219848 cri.go:89] found id: ""
	I1217 12:05:16.688987 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.688996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:16.689003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:16.689070 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:16.719405 3219848 cri.go:89] found id: ""
	I1217 12:05:16.719428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.719437 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:16.719443 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:16.719502 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:16.745166 3219848 cri.go:89] found id: ""
	I1217 12:05:16.745192 3219848 logs.go:282] 0 containers: []
	W1217 12:05:16.745201 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:16.745211 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:16.745223 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:16.771975 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:16.772014 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:16.804149 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:16.804180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:16.861212 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:16.861249 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:16.877226 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:16.877257 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:16.943896 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:16.935292    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.935946    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.937663    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.938200    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:16.939861    9657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:19.444922 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:19.455525 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:19.455598 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:19.480970 3219848 cri.go:89] found id: ""
	I1217 12:05:19.480995 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.481006 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:19.481017 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:19.481079 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:19.506235 3219848 cri.go:89] found id: ""
	I1217 12:05:19.506258 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.506267 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:19.506274 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:19.506333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:19.532063 3219848 cri.go:89] found id: ""
	I1217 12:05:19.532086 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.532095 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:19.532105 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:19.532165 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:19.562427 3219848 cri.go:89] found id: ""
	I1217 12:05:19.562450 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.562460 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:19.562466 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:19.562524 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:19.587869 3219848 cri.go:89] found id: ""
	I1217 12:05:19.587903 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.587912 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:19.587919 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:19.587990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:19.612889 3219848 cri.go:89] found id: ""
	I1217 12:05:19.612916 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.612925 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:19.612932 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:19.612990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:19.637949 3219848 cri.go:89] found id: ""
	I1217 12:05:19.637972 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.637980 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:19.637992 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:19.638053 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:19.666633 3219848 cri.go:89] found id: ""
	I1217 12:05:19.666703 3219848 logs.go:282] 0 containers: []
	W1217 12:05:19.666740 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:19.666769 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:19.666798 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:19.726394 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:19.726430 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:19.742581 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:19.742662 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:19.807145 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:19.798144    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.799463    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.800143    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.801047    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:19.802652    9758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:19.807174 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:19.807187 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:19.832758 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:19.832792 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:22.366107 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:22.376592 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:22.376666 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:22.401822 3219848 cri.go:89] found id: ""
	I1217 12:05:22.401847 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.401857 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:22.401863 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:22.401921 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:22.425903 3219848 cri.go:89] found id: ""
	I1217 12:05:22.425927 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.425936 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:22.425943 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:22.426008 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:22.454459 3219848 cri.go:89] found id: ""
	I1217 12:05:22.454484 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.454493 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:22.454499 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:22.454559 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:22.479178 3219848 cri.go:89] found id: ""
	I1217 12:05:22.479202 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.479212 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:22.479219 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:22.479276 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:22.505859 3219848 cri.go:89] found id: ""
	I1217 12:05:22.505885 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.505900 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:22.505908 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:22.505995 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:22.531485 3219848 cri.go:89] found id: ""
	I1217 12:05:22.531506 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.531515 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:22.531523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:22.531583 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:22.558267 3219848 cri.go:89] found id: ""
	I1217 12:05:22.558343 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.558360 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:22.558367 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:22.558427 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:22.588380 3219848 cri.go:89] found id: ""
	I1217 12:05:22.588431 3219848 logs.go:282] 0 containers: []
	W1217 12:05:22.588442 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:22.588451 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:22.588463 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:22.647590 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:22.647629 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:22.665568 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:22.665597 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:22.738273 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:22.729900    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.730477    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732137    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.732564    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:22.734423    9870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:22.738298 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:22.738310 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:22.764468 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:22.764503 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:25.296756 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:25.320288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:25.320356 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:25.347938 3219848 cri.go:89] found id: ""
	I1217 12:05:25.347959 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.347967 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:25.347973 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:25.348030 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:25.375407 3219848 cri.go:89] found id: ""
	I1217 12:05:25.375428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.375438 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:25.375444 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:25.375501 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:25.400165 3219848 cri.go:89] found id: ""
	I1217 12:05:25.400187 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.400195 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:25.400202 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:25.400266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:25.428203 3219848 cri.go:89] found id: ""
	I1217 12:05:25.428229 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.428240 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:25.428247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:25.428307 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:25.454651 3219848 cri.go:89] found id: ""
	I1217 12:05:25.454675 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.454685 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:25.454692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:25.454754 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:25.478961 3219848 cri.go:89] found id: ""
	I1217 12:05:25.478987 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.478996 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:25.479003 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:25.479088 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:25.508637 3219848 cri.go:89] found id: ""
	I1217 12:05:25.508661 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.508670 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:25.508676 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:25.508782 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:25.534245 3219848 cri.go:89] found id: ""
	I1217 12:05:25.534270 3219848 logs.go:282] 0 containers: []
	W1217 12:05:25.534279 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:25.534289 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:25.534306 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:25.569632 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:25.569662 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:25.625748 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:25.625783 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:25.641383 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:25.641409 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:25.709135 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:25.700728    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.701299    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703107    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.703541    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:25.705177    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:25.709156 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:25.709168 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:28.233802 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:28.244795 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:28.244872 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:28.291390 3219848 cri.go:89] found id: ""
	I1217 12:05:28.291412 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.291421 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:28.291427 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:28.291488 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:28.355887 3219848 cri.go:89] found id: ""
	I1217 12:05:28.355909 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.355917 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:28.355924 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:28.355983 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:28.381610 3219848 cri.go:89] found id: ""
	I1217 12:05:28.381633 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.381641 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:28.381647 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:28.381707 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:28.407516 3219848 cri.go:89] found id: ""
	I1217 12:05:28.407544 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.407553 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:28.407560 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:28.407622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:28.436914 3219848 cri.go:89] found id: ""
	I1217 12:05:28.436982 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.437006 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:28.437021 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:28.437098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:28.461189 3219848 cri.go:89] found id: ""
	I1217 12:05:28.461258 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.461283 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:28.461298 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:28.461373 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:28.490913 3219848 cri.go:89] found id: ""
	I1217 12:05:28.490948 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.490958 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:28.490965 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:28.491033 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:28.521566 3219848 cri.go:89] found id: ""
	I1217 12:05:28.521589 3219848 logs.go:282] 0 containers: []
	W1217 12:05:28.521599 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:28.521610 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:28.521622 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:28.577123 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:28.577159 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:28.593088 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:28.593119 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:28.655447 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:28.646846   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.647657   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649169   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.649707   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:28.651192   10096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:05:28.655472 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:28.655484 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:28.680532 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:28.680566 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
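	Note: the loop above repeats a single probe sequence: pgrep for a live kube-apiserver process, then a per-component crictl query. A minimal sketch for reproducing that probe by hand, assuming a shell on the minikube node (e.g. via "minikube ssh"; the profile name is not shown in this excerpt):

	    # pgrep flags as used by minikube: -x exact match of the pattern,
	    # -n newest matching process, -f match against the full command line
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    # Ask the CRI runtime for kube-apiserver containers in any state (running or exited)
	    sudo crictl ps -a --quiet --name=kube-apiserver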
	I1217 12:05:31.213979 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:31.224716 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:31.224784 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:31.257050 3219848 cri.go:89] found id: ""
	I1217 12:05:31.257071 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.257079 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:31.257085 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:31.257141 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:31.315656 3219848 cri.go:89] found id: ""
	I1217 12:05:31.315677 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.315686 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:31.315692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:31.315746 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:31.349340 3219848 cri.go:89] found id: ""
	I1217 12:05:31.349360 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.349369 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:31.349375 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:31.349432 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:31.374728 3219848 cri.go:89] found id: ""
	I1217 12:05:31.374755 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.374764 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:31.374771 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:31.374833 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:31.401386 3219848 cri.go:89] found id: ""
	I1217 12:05:31.401422 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.401432 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:31.401439 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:31.401511 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:31.427234 3219848 cri.go:89] found id: ""
	I1217 12:05:31.427260 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.427270 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:31.427277 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:31.427338 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:31.452628 3219848 cri.go:89] found id: ""
	I1217 12:05:31.452666 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.452676 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:31.452684 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:31.452756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:31.476684 3219848 cri.go:89] found id: ""
	I1217 12:05:31.476717 3219848 logs.go:282] 0 containers: []
	W1217 12:05:31.476725 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:31.476735 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:31.476745 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:31.533895 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:31.533928 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:31.549405 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:31.549433 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:31.617988 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:31.609983   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.610706   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612313   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612915   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.613854   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:31.609983   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.610706   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612313   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.612915   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:31.613854   10209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:31.618022 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:31.618051 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:31.643544 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:31.643575 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:34.173214 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:34.183798 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:34.183881 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:34.208274 3219848 cri.go:89] found id: ""
	I1217 12:05:34.208299 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.208309 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:34.208315 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:34.208377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:34.232844 3219848 cri.go:89] found id: ""
	I1217 12:05:34.232870 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.232879 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:34.232886 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:34.232947 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:34.298630 3219848 cri.go:89] found id: ""
	I1217 12:05:34.298656 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.298665 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:34.298672 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:34.298732 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:34.352614 3219848 cri.go:89] found id: ""
	I1217 12:05:34.352657 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.352672 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:34.352679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:34.352745 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:34.378134 3219848 cri.go:89] found id: ""
	I1217 12:05:34.378156 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.378165 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:34.378171 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:34.378234 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:34.402637 3219848 cri.go:89] found id: ""
	I1217 12:05:34.402660 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.402668 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:34.402675 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:34.402758 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:34.428834 3219848 cri.go:89] found id: ""
	I1217 12:05:34.428906 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.428941 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:34.428948 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:34.429006 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:34.459618 3219848 cri.go:89] found id: ""
	I1217 12:05:34.459641 3219848 logs.go:282] 0 containers: []
	W1217 12:05:34.459654 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:34.459663 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:34.459674 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:34.514834 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:34.514867 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:34.531691 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:34.531717 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:34.603404 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:34.594905   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.595653   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597085   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597664   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.599260   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:34.594905   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.595653   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597085   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.597664   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:34.599260   10325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:34.603478 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:34.603498 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:34.629092 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:34.629131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
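	Note: every kubectl attempt above fails with "dial tcp [::1]:8443: connect: connection refused", i.e. nothing is listening on the port the kubeconfig targets. A quick manual check, assuming ss and curl are present in the node image (neither is part of the test's own command set):

	    # Is anything bound to the apiserver port?
	    sudo ss -ltn 'sport = :8443'
	    # Probe the health endpoint directly; -k skips TLS verification,
	    # --max-time bounds the hang if the port silently drops packets
	    curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable"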
	I1217 12:05:37.158533 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:37.170305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:37.170377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:37.195895 3219848 cri.go:89] found id: ""
	I1217 12:05:37.195920 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.195929 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:37.195936 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:37.195994 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:37.221126 3219848 cri.go:89] found id: ""
	I1217 12:05:37.221153 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.221162 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:37.221170 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:37.221228 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:37.246560 3219848 cri.go:89] found id: ""
	I1217 12:05:37.246584 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.246593 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:37.246600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:37.246663 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:37.289595 3219848 cri.go:89] found id: ""
	I1217 12:05:37.289620 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.289629 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:37.289635 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:37.289707 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:37.324903 3219848 cri.go:89] found id: ""
	I1217 12:05:37.324923 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.324932 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:37.324939 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:37.324997 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:37.361173 3219848 cri.go:89] found id: ""
	I1217 12:05:37.361194 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.361204 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:37.361210 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:37.361269 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:37.389438 3219848 cri.go:89] found id: ""
	I1217 12:05:37.389461 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.389470 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:37.389476 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:37.389537 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:37.414662 3219848 cri.go:89] found id: ""
	I1217 12:05:37.414700 3219848 logs.go:282] 0 containers: []
	W1217 12:05:37.414710 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:37.414719 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:37.414731 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:37.478614 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:37.471157   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.471613   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473091   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473470   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.474881   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:37.471157   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.471613   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473091   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.473470   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:37.474881   10431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:37.478647 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:37.478661 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:37.504204 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:37.504241 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:37.535207 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:37.535282 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:37.594334 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:37.594382 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
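	Note: the two journal-gathering commands above, with their flags spelled out per the util-linux and systemd man pages:

	    sudo journalctl -u kubelet -n 400   # last 400 lines of the kubelet unit only
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # -P: no pager, -H: human-readable timestamps, -L=never: no color codes,
	    # --level: keep only warning-and-worse kernel messages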
	I1217 12:05:40.110392 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:40.122282 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:40.122363 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:40.148146 3219848 cri.go:89] found id: ""
	I1217 12:05:40.148171 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.148180 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:40.148186 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:40.148248 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:40.175122 3219848 cri.go:89] found id: ""
	I1217 12:05:40.175149 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.175158 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:40.175164 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:40.175224 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:40.201606 3219848 cri.go:89] found id: ""
	I1217 12:05:40.201629 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.201638 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:40.201644 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:40.201702 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:40.227663 3219848 cri.go:89] found id: ""
	I1217 12:05:40.227688 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.227697 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:40.227704 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:40.227760 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:40.279855 3219848 cri.go:89] found id: ""
	I1217 12:05:40.279881 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.279889 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:40.279896 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:40.279955 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:40.341349 3219848 cri.go:89] found id: ""
	I1217 12:05:40.341372 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.341381 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:40.341388 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:40.341445 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:40.366250 3219848 cri.go:89] found id: ""
	I1217 12:05:40.366276 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.366285 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:40.366292 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:40.366374 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:40.390064 3219848 cri.go:89] found id: ""
	I1217 12:05:40.390091 3219848 logs.go:282] 0 containers: []
	W1217 12:05:40.390100 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:40.390112 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:40.390143 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:40.417840 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:40.417866 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:40.474223 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:40.474260 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:40.489995 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:40.490025 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:40.558792 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:40.550389   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.551049   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.552779   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.553286   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.554780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:40.550389   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.551049   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.552779   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.553286   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:40.554780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:40.558816 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:40.558829 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:43.085654 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:43.096719 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:43.096788 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:43.122755 3219848 cri.go:89] found id: ""
	I1217 12:05:43.122822 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.122846 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:43.122862 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:43.122942 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:43.149072 3219848 cri.go:89] found id: ""
	I1217 12:05:43.149097 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.149106 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:43.149113 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:43.149192 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:43.175863 3219848 cri.go:89] found id: ""
	I1217 12:05:43.175889 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.175897 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:43.175929 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:43.176015 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:43.202533 3219848 cri.go:89] found id: ""
	I1217 12:05:43.202572 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.202580 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:43.202587 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:43.202649 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:43.227233 3219848 cri.go:89] found id: ""
	I1217 12:05:43.227307 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.227331 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:43.227352 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:43.227449 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:43.293609 3219848 cri.go:89] found id: ""
	I1217 12:05:43.293677 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.293701 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:43.293723 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:43.293807 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:43.345463 3219848 cri.go:89] found id: ""
	I1217 12:05:43.345537 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.345563 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:43.345584 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:43.345692 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:43.376719 3219848 cri.go:89] found id: ""
	I1217 12:05:43.376754 3219848 logs.go:282] 0 containers: []
	W1217 12:05:43.376763 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:43.376772 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:43.376785 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:43.434376 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:43.434408 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:43.449996 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:43.450023 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:43.518159 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:43.509135   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.509732   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511431   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511909   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.513376   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:43.509135   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.509732   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511431   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.511909   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:43.513376   10662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:43.518179 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:43.518193 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:43.544448 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:43.544487 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
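	Note: the "container status" probe uses a fallback chain: crictl if installed, otherwise docker ps. Alongside it, checking the kubelet service is often the fastest way to see whether the static control-plane pods are even being launched; the systemctl commands below are an illustrative addition, not something the test runs:

	    sudo systemctl is-active kubelet                      # "active" / "inactive" / "activating"
	    sudo systemctl status kubelet --no-pager -l | head -n 20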
	I1217 12:05:46.079862 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:46.091017 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:46.091085 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:46.116886 3219848 cri.go:89] found id: ""
	I1217 12:05:46.116913 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.116924 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:46.116939 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:46.117008 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:46.142188 3219848 cri.go:89] found id: ""
	I1217 12:05:46.142216 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.142227 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:46.142234 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:46.142296 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:46.168033 3219848 cri.go:89] found id: ""
	I1217 12:05:46.168059 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.168068 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:46.168075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:46.168141 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:46.194149 3219848 cri.go:89] found id: ""
	I1217 12:05:46.194178 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.194188 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:46.194197 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:46.194257 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:46.220319 3219848 cri.go:89] found id: ""
	I1217 12:05:46.220345 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.220354 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:46.220360 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:46.220456 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:46.246104 3219848 cri.go:89] found id: ""
	I1217 12:05:46.246131 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.246140 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:46.246147 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:46.246208 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:46.281496 3219848 cri.go:89] found id: ""
	I1217 12:05:46.281520 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.281528 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:46.281535 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:46.281597 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:46.327477 3219848 cri.go:89] found id: ""
	I1217 12:05:46.327558 3219848 logs.go:282] 0 containers: []
	W1217 12:05:46.327582 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:46.327625 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:46.327653 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:46.407413 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:46.407451 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:46.423419 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:46.423448 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:46.489920 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:46.481686   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.482194   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.483773   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.484191   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.485671   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:46.481686   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.482194   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.483773   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.484191   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:46.485671   10773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:46.489945 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:46.489959 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:46.516022 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:46.516061 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:49.045130 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:49.056135 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:49.056216 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:49.080547 3219848 cri.go:89] found id: ""
	I1217 12:05:49.080568 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.080577 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:49.080583 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:49.080645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:49.106805 3219848 cri.go:89] found id: ""
	I1217 12:05:49.106834 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.106844 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:49.106850 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:49.106911 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:49.132478 3219848 cri.go:89] found id: ""
	I1217 12:05:49.132501 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.132509 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:49.132515 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:49.132579 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:49.159868 3219848 cri.go:89] found id: ""
	I1217 12:05:49.159896 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.159906 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:49.159912 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:49.159971 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:49.187789 3219848 cri.go:89] found id: ""
	I1217 12:05:49.187814 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.187835 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:49.187843 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:49.187902 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:49.213461 3219848 cri.go:89] found id: ""
	I1217 12:05:49.213489 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.213498 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:49.213505 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:49.213612 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:49.240191 3219848 cri.go:89] found id: ""
	I1217 12:05:49.240220 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.240229 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:49.240247 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:49.240343 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:49.295243 3219848 cri.go:89] found id: ""
	I1217 12:05:49.295291 3219848 logs.go:282] 0 containers: []
	W1217 12:05:49.295306 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:49.295319 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:49.295331 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:49.359872 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:49.359903 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:49.427963 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:49.428002 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:49.444788 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:49.444818 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:49.510631 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:49.502008   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.502410   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504142   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504867   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.506554   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:49.502008   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.502410   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504142   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.504867   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:49.506554   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:49.510652 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:49.510663 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
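	Note: to confirm which endpoint those refused connections were aimed at, read the server field out of the same kubeconfig the test passes to kubectl. Both file paths below are taken verbatim from the log; the jsonpath query is an illustrative extra:

	    sudo grep 'server:' /var/lib/minikube/kubeconfig
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      config view -o jsonpath='{.clusters[0].cluster.server}'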
	I1217 12:05:52.036765 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:52.049010 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:52.049084 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:52.075472 3219848 cri.go:89] found id: ""
	I1217 12:05:52.075500 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.075510 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:52.075517 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:52.075582 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:52.105198 3219848 cri.go:89] found id: ""
	I1217 12:05:52.105222 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.105231 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:52.105238 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:52.105295 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:52.133404 3219848 cri.go:89] found id: ""
	I1217 12:05:52.133428 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.133439 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:52.133445 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:52.133507 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:52.158170 3219848 cri.go:89] found id: ""
	I1217 12:05:52.158195 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.158205 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:52.158212 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:52.158270 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:52.182679 3219848 cri.go:89] found id: ""
	I1217 12:05:52.182704 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.182713 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:52.182720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:52.182778 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:52.211743 3219848 cri.go:89] found id: ""
	I1217 12:05:52.211769 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.211778 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:52.211785 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:52.211845 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:52.237892 3219848 cri.go:89] found id: ""
	I1217 12:05:52.237918 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.237927 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:52.237933 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:52.237990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:52.281029 3219848 cri.go:89] found id: ""
	I1217 12:05:52.281055 3219848 logs.go:282] 0 containers: []
	W1217 12:05:52.281063 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:52.281073 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:52.281089 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:52.374683 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:52.374721 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:52.390831 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:52.390863 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:52.454058 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:52.444629   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.445414   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447158   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447874   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.449604   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:52.444629   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.445414   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447158   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.447874   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:52.449604   11000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:52.454081 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:52.454095 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:52.479410 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:52.479443 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
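
Each probe in the cycle above has the same shape: for one component name at a time, minikube runs `sudo crictl ps -a --quiet --name=<component>` over SSH and treats empty output as zero containers. A minimal Go sketch of that pattern (hypothetical, not minikube's actual code; exec.Command stands in for the ssh_runner, and crictl is assumed to be on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe runs `sudo crictl ps -a --quiet --name=<name>` and returns any
// container IDs printed. --quiet emits one ID per line, so empty output
// means "0 containers", matching the `found id: ""` lines above.
func probe(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := probe(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: found %d container(s)\n", c, len(ids))
	}
}
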
	I1217 12:05:55.007287 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:55.021703 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:55.021785 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:55.047053 3219848 cri.go:89] found id: ""
	I1217 12:05:55.047076 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.047085 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:55.047092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:55.047149 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:55.074641 3219848 cri.go:89] found id: ""
	I1217 12:05:55.074665 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.074674 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:55.074680 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:55.074742 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:55.103484 3219848 cri.go:89] found id: ""
	I1217 12:05:55.103512 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.103521 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:55.103527 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:55.103586 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:55.132461 3219848 cri.go:89] found id: ""
	I1217 12:05:55.132487 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.132497 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:55.132503 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:55.132561 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:55.157595 3219848 cri.go:89] found id: ""
	I1217 12:05:55.157618 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.157626 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:55.157632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:55.157694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:55.187334 3219848 cri.go:89] found id: ""
	I1217 12:05:55.187354 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.187364 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:55.187371 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:55.187529 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:55.212469 3219848 cri.go:89] found id: ""
	I1217 12:05:55.212492 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.212501 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:55.212508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:55.212567 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:55.238155 3219848 cri.go:89] found id: ""
	I1217 12:05:55.238188 3219848 logs.go:282] 0 containers: []
	W1217 12:05:55.238198 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:55.238208 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:55.238237 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:55.361507 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:55.352214   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.352982   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.354653   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.355179   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.356793   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:55.352214   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.352982   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.354653   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.355179   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:55.356793   11105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:55.361529 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:55.361542 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:55.387722 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:55.387760 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:05:55.415663 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:55.415688 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:55.471304 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:55.471342 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:57.988615 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:05:57.999088 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:05:57.999163 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:05:58.029910 3219848 cri.go:89] found id: ""
	I1217 12:05:58.029938 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.029948 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:05:58.029955 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:05:58.030021 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:05:58.056383 3219848 cri.go:89] found id: ""
	I1217 12:05:58.056409 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.056461 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:05:58.056468 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:05:58.056526 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:05:58.082442 3219848 cri.go:89] found id: ""
	I1217 12:05:58.082468 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.082477 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:05:58.082483 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:05:58.082543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:05:58.110467 3219848 cri.go:89] found id: ""
	I1217 12:05:58.110491 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.110500 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:05:58.110507 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:05:58.110574 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:05:58.136852 3219848 cri.go:89] found id: ""
	I1217 12:05:58.136879 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.136888 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:05:58.136895 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:05:58.136976 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:05:58.163746 3219848 cri.go:89] found id: ""
	I1217 12:05:58.163772 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.163782 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:05:58.163788 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:05:58.163847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:05:58.190425 3219848 cri.go:89] found id: ""
	I1217 12:05:58.190451 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.190460 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:05:58.190467 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:05:58.190529 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:05:58.220315 3219848 cri.go:89] found id: ""
	I1217 12:05:58.220338 3219848 logs.go:282] 0 containers: []
	W1217 12:05:58.220347 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:05:58.220358 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:05:58.220368 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:05:58.290204 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:05:58.290287 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:05:58.323039 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:05:58.323120 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:05:58.402482 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:05:58.393790   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.394615   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396214   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396884   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.398347   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:05:58.393790   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.394615   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396214   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.396884   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:05:58.398347   11231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:05:58.402504 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:05:58.402521 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:05:58.428716 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:05:58.428754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
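
The `{State:all Name:etcd Namespaces:[]}` text in the cri.go lines above is Go's `%+v` rendering of the listing filter passed to each probe; a struct of the following shape reproduces that output exactly (field names are inferred from the log text, so treat them as assumptions):

package main

import "fmt"

type ListFilter struct {
	State      string
	Name       string
	Namespaces []string
}

func main() {
	f := ListFilter{State: "all", Name: "etcd"}
	fmt.Printf("listing CRI containers in root /run/containerd/runc/k8s.io: %+v\n", f)
	// Prints: listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
}
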
	I1217 12:06:00.959753 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:00.970910 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:00.970990 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:01.005870 3219848 cri.go:89] found id: ""
	I1217 12:06:01.005941 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.005958 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:01.005967 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:01.006031 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:01.034724 3219848 cri.go:89] found id: ""
	I1217 12:06:01.034747 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.034756 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:01.034765 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:01.034823 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:01.059798 3219848 cri.go:89] found id: ""
	I1217 12:06:01.059824 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.059836 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:01.059842 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:01.059900 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:01.089347 3219848 cri.go:89] found id: ""
	I1217 12:06:01.089370 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.089378 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:01.089385 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:01.089448 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:01.115166 3219848 cri.go:89] found id: ""
	I1217 12:06:01.115201 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.115211 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:01.115218 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:01.115286 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:01.142081 3219848 cri.go:89] found id: ""
	I1217 12:06:01.142109 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.142118 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:01.142125 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:01.142211 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:01.173172 3219848 cri.go:89] found id: ""
	I1217 12:06:01.173198 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.173208 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:01.173215 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:01.173280 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:01.200453 3219848 cri.go:89] found id: ""
	I1217 12:06:01.200477 3219848 logs.go:282] 0 containers: []
	W1217 12:06:01.200486 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:01.200496 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:01.200506 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:01.226189 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:01.226231 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:01.283020 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:01.283101 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:01.360095 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:01.360131 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:01.377017 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:01.377049 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:01.442041 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:01.434467   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.434821   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436378   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436785   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.438222   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:01.434467   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.434821   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436378   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.436785   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:01.438222   11357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
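
Every `kubectl describe nodes` attempt above fails the same way: `dial tcp [::1]:8443: connect: connection refused`. That is consistent with the kube-apiserver probes finding zero containers; nothing is listening on the apiserver port at all, so the failure happens at the TCP layer before any API call. A quick reachability check makes that failure mode explicit (a sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no kube-apiserver running, this prints something like:
		// dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
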
	I1217 12:06:03.943920 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:03.955271 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:03.955384 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:03.987078 3219848 cri.go:89] found id: ""
	I1217 12:06:03.987106 3219848 logs.go:282] 0 containers: []
	W1217 12:06:03.987115 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:03.987124 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:03.987185 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:04.020179 3219848 cri.go:89] found id: ""
	I1217 12:06:04.020207 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.020243 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:04.020250 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:04.020328 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:04.049457 3219848 cri.go:89] found id: ""
	I1217 12:06:04.049484 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.049494 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:04.049500 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:04.049565 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:04.077274 3219848 cri.go:89] found id: ""
	I1217 12:06:04.077302 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.077311 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:04.077318 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:04.077386 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:04.108697 3219848 cri.go:89] found id: ""
	I1217 12:06:04.108725 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.108734 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:04.108740 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:04.108800 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:04.133870 3219848 cri.go:89] found id: ""
	I1217 12:06:04.133949 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.133974 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:04.133988 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:04.134075 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:04.158589 3219848 cri.go:89] found id: ""
	I1217 12:06:04.158616 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.158625 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:04.158632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:04.158705 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:04.182544 3219848 cri.go:89] found id: ""
	I1217 12:06:04.182568 3219848 logs.go:282] 0 containers: []
	W1217 12:06:04.182577 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:04.182605 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:04.182630 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:04.198694 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:04.198722 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:04.286551 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:04.273260   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.277107   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.278962   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.279268   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.280776   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:04.273260   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.277107   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.278962   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.279268   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:04.280776   11449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:04.286576 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:04.286587 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:04.322177 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:04.322211 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:04.362745 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:04.362774 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:06.922523 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:06.933191 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:06.933262 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:06.962648 3219848 cri.go:89] found id: ""
	I1217 12:06:06.962675 3219848 logs.go:282] 0 containers: []
	W1217 12:06:06.962685 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:06.962692 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:06.962750 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:06.991732 3219848 cri.go:89] found id: ""
	I1217 12:06:06.991757 3219848 logs.go:282] 0 containers: []
	W1217 12:06:06.991765 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:06.991772 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:06.991829 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:07.018557 3219848 cri.go:89] found id: ""
	I1217 12:06:07.018584 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.018594 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:07.018600 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:07.018659 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:07.044679 3219848 cri.go:89] found id: ""
	I1217 12:06:07.044704 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.044713 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:07.044720 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:07.044786 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:07.073836 3219848 cri.go:89] found id: ""
	I1217 12:06:07.073905 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.073930 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:07.073944 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:07.074020 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:07.100945 3219848 cri.go:89] found id: ""
	I1217 12:06:07.100972 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.100982 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:07.100989 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:07.101094 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:07.125935 3219848 cri.go:89] found id: ""
	I1217 12:06:07.125963 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.125972 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:07.125978 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:07.126061 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:07.151599 3219848 cri.go:89] found id: ""
	I1217 12:06:07.151624 3219848 logs.go:282] 0 containers: []
	W1217 12:06:07.151633 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:07.151641 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:07.151653 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:07.167414 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:07.167439 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:07.235174 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:07.226345   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.226997   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.228627   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.229347   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.231069   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:07.226345   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.226997   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.228627   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.229347   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:07.231069   11566 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:07.235246 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:07.235266 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:07.264720 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:07.264754 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:07.349181 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:07.349210 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:09.906484 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:09.917044 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:09.917120 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:09.941939 3219848 cri.go:89] found id: ""
	I1217 12:06:09.942004 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.942024 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:09.942031 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:09.942088 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:09.966481 3219848 cri.go:89] found id: ""
	I1217 12:06:09.966507 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.966515 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:09.966523 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:09.966622 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:09.991806 3219848 cri.go:89] found id: ""
	I1217 12:06:09.991830 3219848 logs.go:282] 0 containers: []
	W1217 12:06:09.991839 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:09.991845 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:09.991901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:10.027713 3219848 cri.go:89] found id: ""
	I1217 12:06:10.027784 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.027800 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:10.027808 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:10.027874 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:10.060097 3219848 cri.go:89] found id: ""
	I1217 12:06:10.060124 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.060133 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:10.060140 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:10.060203 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:10.091977 3219848 cri.go:89] found id: ""
	I1217 12:06:10.092002 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.092010 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:10.092018 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:10.092081 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:10.118481 3219848 cri.go:89] found id: ""
	I1217 12:06:10.118504 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.118513 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:10.118526 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:10.118586 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:10.145196 3219848 cri.go:89] found id: ""
	I1217 12:06:10.145263 3219848 logs.go:282] 0 containers: []
	W1217 12:06:10.145278 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:10.145288 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:10.145306 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:10.161573 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:10.161603 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:10.227235 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:10.218460   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.219270   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.220964   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.221573   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.223258   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:10.218460   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.219270   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.220964   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.221573   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:10.223258   11679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:10.227259 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:10.227273 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:10.253333 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:10.253644 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:10.302209 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:10.302284 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:12.881891 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:12.892449 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:12.892519 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:12.919824 3219848 cri.go:89] found id: ""
	I1217 12:06:12.919848 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.919856 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:12.919863 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:12.919924 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:12.946684 3219848 cri.go:89] found id: ""
	I1217 12:06:12.946711 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.946721 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:12.946728 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:12.946808 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:12.970796 3219848 cri.go:89] found id: ""
	I1217 12:06:12.970820 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.970830 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:12.970837 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:12.970904 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:12.996393 3219848 cri.go:89] found id: ""
	I1217 12:06:12.996459 3219848 logs.go:282] 0 containers: []
	W1217 12:06:12.996469 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:12.996476 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:12.996538 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:13.022560 3219848 cri.go:89] found id: ""
	I1217 12:06:13.022587 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.022596 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:13.022603 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:13.022664 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:13.050809 3219848 cri.go:89] found id: ""
	I1217 12:06:13.050839 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.050849 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:13.050856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:13.050919 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:13.077432 3219848 cri.go:89] found id: ""
	I1217 12:06:13.077460 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.077469 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:13.077477 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:13.077540 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:13.104029 3219848 cri.go:89] found id: ""
	I1217 12:06:13.104056 3219848 logs.go:282] 0 containers: []
	W1217 12:06:13.104065 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:13.104075 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:13.104086 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:13.162000 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:13.162038 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:13.177865 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:13.177891 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:13.241266 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:13.232767   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.233565   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235109   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235417   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.236871   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 12:06:13.232767   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.233565   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235109   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.235417   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:13.236871   11792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:06:13.241289 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:13.241302 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:13.271232 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:13.271269 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
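
The timestamps show the whole cycle repeating on a roughly 3-second cadence (12:05:52, :55, :58, 12:06:01, ...), i.e. a poll-until-deadline loop wrapped around the `pgrep` process check that opens each iteration. A sketch of that loop (the 6-minute deadline is an assumption for illustration, not a value taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
// probe above: pgrep exits non-zero when no matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the spacing of the probes above
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
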
	I1217 12:06:15.839567 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:15.850326 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:15.850396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:15.875471 3219848 cri.go:89] found id: ""
	I1217 12:06:15.875493 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.875502 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:15.875509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:15.875566 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:15.899977 3219848 cri.go:89] found id: ""
	I1217 12:06:15.899998 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.900007 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:15.900013 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:15.900073 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:15.926093 3219848 cri.go:89] found id: ""
	I1217 12:06:15.926117 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.926126 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:15.926133 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:15.926193 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:15.951373 3219848 cri.go:89] found id: ""
	I1217 12:06:15.951397 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.951407 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:15.951414 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:15.951470 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:15.976937 3219848 cri.go:89] found id: ""
	I1217 12:06:15.976963 3219848 logs.go:282] 0 containers: []
	W1217 12:06:15.976972 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:15.976979 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:15.977041 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:16.003518 3219848 cri.go:89] found id: ""
	I1217 12:06:16.003717 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.003750 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:16.003786 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:16.003901 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:16.032115 3219848 cri.go:89] found id: ""
	I1217 12:06:16.032142 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.032151 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:16.032159 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:16.032219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:16.061490 3219848 cri.go:89] found id: ""
	I1217 12:06:16.061517 3219848 logs.go:282] 0 containers: []
	W1217 12:06:16.061526 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:16.061536 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:16.061547 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:16.077146 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:16.077179 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:16.145955 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:16.137946   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.138559   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140379   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.140910   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:16.142053   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:16.145981 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:16.145995 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:16.172145 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:16.172180 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:16.206805 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:16.206833 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
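
Each poll cycle above runs the same per-component CRI scan. A compact sketch of the equivalent manual scan, assuming shell access on the node; the crictl invocation is copied from the log and only the loop wrapper is added:

    # Scan for the same control-plane containers the log gatherer checks.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "${name}: ${ids:-<none found>}"
    done
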
	I1217 12:06:18.766689 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:18.777034 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:18.777108 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:18.805815 3219848 cri.go:89] found id: ""
	I1217 12:06:18.805838 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.805847 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:18.805853 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:18.805910 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:18.831468 3219848 cri.go:89] found id: ""
	I1217 12:06:18.831492 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.831501 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:18.831508 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:18.831567 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:18.859309 3219848 cri.go:89] found id: ""
	I1217 12:06:18.859339 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.859349 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:18.859368 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:18.859436 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:18.884524 3219848 cri.go:89] found id: ""
	I1217 12:06:18.884552 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.884561 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:18.884569 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:18.884665 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:18.909522 3219848 cri.go:89] found id: ""
	I1217 12:06:18.909545 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.909554 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:18.909561 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:18.909620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:18.935126 3219848 cri.go:89] found id: ""
	I1217 12:06:18.935151 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.935161 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:18.935167 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:18.935227 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:18.964480 3219848 cri.go:89] found id: ""
	I1217 12:06:18.964506 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.964516 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:18.964522 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:18.964581 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:18.990408 3219848 cri.go:89] found id: ""
	I1217 12:06:18.990435 3219848 logs.go:282] 0 containers: []
	W1217 12:06:18.990444 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:18.990454 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:18.990466 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:19.017937 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:19.017974 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:19.048976 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:19.049004 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:19.108146 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:19.108184 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:19.125457 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:19.125507 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:19.190960 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:19.182754   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.183274   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185018   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.185416   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:19.186923   12033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:21.691321 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:21.702288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:21.702373 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:21.728533 3219848 cri.go:89] found id: ""
	I1217 12:06:21.728561 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.728571 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:21.728577 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:21.728645 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:21.755298 3219848 cri.go:89] found id: ""
	I1217 12:06:21.755323 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.755333 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:21.755345 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:21.755403 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:21.784470 3219848 cri.go:89] found id: ""
	I1217 12:06:21.784494 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.784503 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:21.784509 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:21.784568 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:21.811503 3219848 cri.go:89] found id: ""
	I1217 12:06:21.811528 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.811538 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:21.811544 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:21.811602 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:21.841147 3219848 cri.go:89] found id: ""
	I1217 12:06:21.841212 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.841227 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:21.841241 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:21.841303 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:21.867736 3219848 cri.go:89] found id: ""
	I1217 12:06:21.867763 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.867773 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:21.867779 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:21.867847 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:21.897039 3219848 cri.go:89] found id: ""
	I1217 12:06:21.897104 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.897121 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:21.897128 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:21.897187 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:21.922398 3219848 cri.go:89] found id: ""
	I1217 12:06:21.922420 3219848 logs.go:282] 0 containers: []
	W1217 12:06:21.922429 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:21.922438 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:21.922449 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:21.980203 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:21.980241 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:21.996482 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:21.996513 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:22.074426 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:22.061574   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.062326   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064118   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.064738   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:22.070487   12135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:22.074474 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:22.074488 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:22.101174 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:22.101210 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:24.630003 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:24.640702 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:24.640773 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:24.666366 3219848 cri.go:89] found id: ""
	I1217 12:06:24.666390 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.666399 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:24.666408 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:24.666465 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:24.693372 3219848 cri.go:89] found id: ""
	I1217 12:06:24.693398 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.693407 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:24.693413 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:24.693478 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:24.723159 3219848 cri.go:89] found id: ""
	I1217 12:06:24.723181 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.723190 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:24.723197 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:24.723264 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:24.747933 3219848 cri.go:89] found id: ""
	I1217 12:06:24.747960 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.747969 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:24.747976 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:24.748044 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:24.774083 3219848 cri.go:89] found id: ""
	I1217 12:06:24.774105 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.774113 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:24.774120 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:24.774186 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:24.808050 3219848 cri.go:89] found id: ""
	I1217 12:06:24.808076 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.808085 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:24.808092 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:24.808200 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:24.833993 3219848 cri.go:89] found id: ""
	I1217 12:06:24.834070 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.834085 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:24.834093 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:24.834153 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:24.860654 3219848 cri.go:89] found id: ""
	I1217 12:06:24.860679 3219848 logs.go:282] 0 containers: []
	W1217 12:06:24.860688 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:24.860697 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:24.860708 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:24.917182 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:24.917265 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:24.933462 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:24.933491 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:25.002903 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:24.992978   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.993789   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.995410   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.996068   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:24.997870   12248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:25.002927 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:25.002960 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:25.031774 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:25.031809 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
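
With every component list coming back empty, the only useful evidence is in the unit logs and kernel messages the gatherer pulls each cycle. A sketch that bundles those same collection commands into one pass (the journalctl, dmesg, and crictl invocations are verbatim from the log; the output file names are invented for illustration):

    # Collect the same diagnostics minikube gathers on every cycle.
    sudo journalctl -u kubelet -n 400    > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
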
	I1217 12:06:27.560620 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:27.575695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:27.575766 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:27.603395 3219848 cri.go:89] found id: ""
	I1217 12:06:27.603421 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.603430 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:27.603436 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:27.603498 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:27.628716 3219848 cri.go:89] found id: ""
	I1217 12:06:27.628739 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.628747 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:27.628754 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:27.628810 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:27.653566 3219848 cri.go:89] found id: ""
	I1217 12:06:27.653629 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.653653 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:27.653679 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:27.653756 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:27.679125 3219848 cri.go:89] found id: ""
	I1217 12:06:27.679150 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.679159 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:27.679166 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:27.679245 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:27.705566 3219848 cri.go:89] found id: ""
	I1217 12:06:27.705632 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.705656 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:27.705677 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:27.705762 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:27.730473 3219848 cri.go:89] found id: ""
	I1217 12:06:27.730541 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.730556 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:27.730564 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:27.730639 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:27.755451 3219848 cri.go:89] found id: ""
	I1217 12:06:27.755476 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.755485 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:27.755492 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:27.755552 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:27.783637 3219848 cri.go:89] found id: ""
	I1217 12:06:27.783663 3219848 logs.go:282] 0 containers: []
	W1217 12:06:27.783673 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:27.783682 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:27.783693 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:27.815668 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:27.815707 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:27.846761 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:27.846788 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:27.903961 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:27.903992 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:27.920251 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:27.920285 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:27.989986 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:27.982512   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.983471   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984453   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.984910   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:27.985983   12375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:30.490267 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:30.501854 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:30.501936 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:30.577313 3219848 cri.go:89] found id: ""
	I1217 12:06:30.577342 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.577352 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:30.577376 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:30.577460 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:30.606634 3219848 cri.go:89] found id: ""
	I1217 12:06:30.606660 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.606670 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:30.606676 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:30.606744 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:30.632310 3219848 cri.go:89] found id: ""
	I1217 12:06:30.632342 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.632351 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:30.632358 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:30.632473 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:30.658929 3219848 cri.go:89] found id: ""
	I1217 12:06:30.658960 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.658970 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:30.658976 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:30.659036 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:30.690494 3219848 cri.go:89] found id: ""
	I1217 12:06:30.690519 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.690529 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:30.690535 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:30.690598 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:30.716270 3219848 cri.go:89] found id: ""
	I1217 12:06:30.716295 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.716305 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:30.716312 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:30.716396 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:30.743684 3219848 cri.go:89] found id: ""
	I1217 12:06:30.743720 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.743738 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:30.743745 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:30.743823 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:30.771862 3219848 cri.go:89] found id: ""
	I1217 12:06:30.771895 3219848 logs.go:282] 0 containers: []
	W1217 12:06:30.771905 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:30.771915 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:30.771928 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:30.829962 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:30.829997 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:30.846244 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:30.846269 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:30.910789 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:30.902355   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.903184   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.904920   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.905376   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:30.906932   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:30.910812 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:30.910825 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:30.937515 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:30.937552 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:33.467661 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:33.479263 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:33.479335 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:33.531382 3219848 cri.go:89] found id: ""
	I1217 12:06:33.531405 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.531414 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:33.531420 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:33.531491 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:33.586607 3219848 cri.go:89] found id: ""
	I1217 12:06:33.586628 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.586637 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:33.586651 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:33.586708 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:33.622903 3219848 cri.go:89] found id: ""
	I1217 12:06:33.622925 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.622934 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:33.622940 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:33.623012 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:33.652846 3219848 cri.go:89] found id: ""
	I1217 12:06:33.652874 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.652882 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:33.652889 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:33.652946 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:33.677852 3219848 cri.go:89] found id: ""
	I1217 12:06:33.677877 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.677886 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:33.677893 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:33.677972 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:33.706815 3219848 cri.go:89] found id: ""
	I1217 12:06:33.706840 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.706849 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:33.706856 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:33.706918 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:33.736780 3219848 cri.go:89] found id: ""
	I1217 12:06:33.736806 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.736816 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:33.736822 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:33.736880 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:33.761376 3219848 cri.go:89] found id: ""
	I1217 12:06:33.761414 3219848 logs.go:282] 0 containers: []
	W1217 12:06:33.761424 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:33.761433 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:33.761445 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:33.819076 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:33.819113 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:33.835282 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:33.835311 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:33.903109 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:33.894518   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.895131   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.896856   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.897422   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:33.899092   12592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:33.903181 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:33.903206 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:33.935593 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:33.935636 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:36.469816 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:36.480311 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:36.480394 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:36.522000 3219848 cri.go:89] found id: ""
	I1217 12:06:36.522026 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.522035 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:36.522041 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:36.522098 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:36.584783 3219848 cri.go:89] found id: ""
	I1217 12:06:36.584811 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.584819 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:36.584825 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:36.584885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:36.614443 3219848 cri.go:89] found id: ""
	I1217 12:06:36.614469 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.614478 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:36.614484 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:36.614543 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:36.642952 3219848 cri.go:89] found id: ""
	I1217 12:06:36.642974 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.642982 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:36.642989 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:36.643047 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:36.667989 3219848 cri.go:89] found id: ""
	I1217 12:06:36.668011 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.668019 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:36.668025 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:36.668109 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:36.696974 3219848 cri.go:89] found id: ""
	I1217 12:06:36.697049 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.697062 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:36.697096 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:36.697191 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:36.723789 3219848 cri.go:89] found id: ""
	I1217 12:06:36.723812 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.723821 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:36.723828 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:36.723885 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:36.748007 3219848 cri.go:89] found id: ""
	I1217 12:06:36.748078 3219848 logs.go:282] 0 containers: []
	W1217 12:06:36.748102 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:36.748126 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:36.748167 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:36.778526 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:36.778554 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:36.834614 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:36.834648 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:36.852247 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:36.852276 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:36.920099 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:36.911022   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.911723   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913310   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.913643   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:36.915127   12717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:36.920123 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:36.920135 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
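
The block above is one pass of minikube's apiserver wait loop: probe for a kube-apiserver process, enumerate each expected CRI container, then gather kubelet, dmesg, describe-nodes, and containerd logs before retrying roughly every three seconds. The same probes can be rerun by hand; a minimal sketch, assuming the newest-cni-669680 profile named later in this log is still up (both commands reuse flags that appear verbatim above):

    # re-run the apiserver process probe by hand (sketch)
    minikube ssh -p newest-cni-669680 "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    # list CRI containers the way the wait loop does; empty output matches 'found id: ""'
    minikube ssh -p newest-cni-669680 "sudo crictl ps -a --quiet --name=kube-apiserver"
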
	I1217 12:06:39.447091 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:39.457670 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:39.457740 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:39.481236 3219848 cri.go:89] found id: ""
	I1217 12:06:39.481260 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.481269 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:39.481276 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:39.481333 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:39.539773 3219848 cri.go:89] found id: ""
	I1217 12:06:39.539800 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.539810 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:39.539817 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:39.539879 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:39.586024 3219848 cri.go:89] found id: ""
	I1217 12:06:39.586053 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.586069 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:39.586075 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:39.586133 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:39.614247 3219848 cri.go:89] found id: ""
	I1217 12:06:39.614272 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.614281 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:39.614288 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:39.614348 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:39.639817 3219848 cri.go:89] found id: ""
	I1217 12:06:39.639840 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.639848 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:39.639855 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:39.639910 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:39.663356 3219848 cri.go:89] found id: ""
	I1217 12:06:39.663382 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.663390 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:39.663397 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:39.663457 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:39.692611 3219848 cri.go:89] found id: ""
	I1217 12:06:39.692638 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.692647 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:39.692654 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:39.692714 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:39.718640 3219848 cri.go:89] found id: ""
	I1217 12:06:39.718665 3219848 logs.go:282] 0 containers: []
	W1217 12:06:39.718674 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:39.718686 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:39.718698 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:39.743735 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:39.743776 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:39.776101 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:39.776130 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:39.839871 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:39.839912 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:39.856925 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:39.856956 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:39.927715 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:39.920216   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.920790   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.921971   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.922477   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:39.923988   12833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:42.428378 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:42.439785 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:42.439861 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:42.467826 3219848 cri.go:89] found id: ""
	I1217 12:06:42.467849 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.467857 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:42.467864 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:42.467928 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:42.492505 3219848 cri.go:89] found id: ""
	I1217 12:06:42.492533 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.492542 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:42.492549 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:42.492607 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:42.562039 3219848 cri.go:89] found id: ""
	I1217 12:06:42.562062 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.562071 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:42.562077 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:42.562147 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:42.600111 3219848 cri.go:89] found id: ""
	I1217 12:06:42.600139 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.600148 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:42.600155 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:42.600218 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:42.631003 3219848 cri.go:89] found id: ""
	I1217 12:06:42.631026 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.631035 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:42.631042 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:42.631101 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:42.655257 3219848 cri.go:89] found id: ""
	I1217 12:06:42.655283 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.655292 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:42.655305 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:42.655366 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:42.681199 3219848 cri.go:89] found id: ""
	I1217 12:06:42.681220 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.681229 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:42.681236 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:42.681295 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:42.706511 3219848 cri.go:89] found id: ""
	I1217 12:06:42.706535 3219848 logs.go:282] 0 containers: []
	W1217 12:06:42.706544 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:42.706553 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:42.706565 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:42.762839 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:42.762875 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:42.779904 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:42.779936 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:42.849079 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:42.840586   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.841187   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.842724   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.843182   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:42.844615   12933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:42.849103 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:42.849114 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:42.874488 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:42.874529 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:45.406478 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:45.417919 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:45.417989 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:45.445578 3219848 cri.go:89] found id: ""
	I1217 12:06:45.445614 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.445624 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:45.445632 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:45.445694 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:45.477590 3219848 cri.go:89] found id: ""
	I1217 12:06:45.477674 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.477699 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:45.477735 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:45.477831 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:45.515743 3219848 cri.go:89] found id: ""
	I1217 12:06:45.515765 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.515774 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:45.515781 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:45.515840 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:45.550588 3219848 cri.go:89] found id: ""
	I1217 12:06:45.550610 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.550619 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:45.550626 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:45.550684 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:45.595764 3219848 cri.go:89] found id: ""
	I1217 12:06:45.595785 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.595794 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:45.595802 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:45.595862 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:45.621971 3219848 cri.go:89] found id: ""
	I1217 12:06:45.621994 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.622003 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:45.622010 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:45.622077 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:45.648142 3219848 cri.go:89] found id: ""
	I1217 12:06:45.648176 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.648186 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:45.648193 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:45.648266 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:45.677328 3219848 cri.go:89] found id: ""
	I1217 12:06:45.677364 3219848 logs.go:282] 0 containers: []
	W1217 12:06:45.677373 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:45.677383 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:45.677401 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:45.750976 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:45.739342   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.739999   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.744563   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.745136   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:45.746629   13040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:45.750998 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:45.751012 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:45.777019 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:45.777056 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:45.805927 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:45.805957 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:45.861380 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:45.861414 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:48.377400 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:48.388086 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:48.388158 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:48.412282 3219848 cri.go:89] found id: ""
	I1217 12:06:48.412305 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.412313 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:48.412320 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:48.412377 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:48.437811 3219848 cri.go:89] found id: ""
	I1217 12:06:48.437846 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.437856 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:48.437879 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:48.437953 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:48.462517 3219848 cri.go:89] found id: ""
	I1217 12:06:48.462539 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.462547 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:48.462557 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:48.462615 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:48.486379 3219848 cri.go:89] found id: ""
	I1217 12:06:48.486402 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.486411 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:48.486418 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:48.486475 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:48.582544 3219848 cri.go:89] found id: ""
	I1217 12:06:48.582569 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.582578 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:48.582585 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:48.582691 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:48.612954 3219848 cri.go:89] found id: ""
	I1217 12:06:48.612980 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.612990 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:48.612997 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:48.613058 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:48.638059 3219848 cri.go:89] found id: ""
	I1217 12:06:48.638083 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.638091 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:48.638098 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:48.638160 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:48.663252 3219848 cri.go:89] found id: ""
	I1217 12:06:48.663278 3219848 logs.go:282] 0 containers: []
	W1217 12:06:48.663288 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:48.663298 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:48.663308 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:48.719388 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:48.719422 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:48.735198 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:48.735227 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:48.801972 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:48.793731   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.794319   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.795999   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.796684   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:48.798278   13159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:48.801995 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:48.802008 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:48.827753 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:48.827787 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:51.362888 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:51.373695 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:06:51.373779 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:06:51.399521 3219848 cri.go:89] found id: ""
	I1217 12:06:51.399547 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.399556 3219848 logs.go:284] No container was found matching "kube-apiserver"
	I1217 12:06:51.399563 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1217 12:06:51.399620 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:06:51.425074 3219848 cri.go:89] found id: ""
	I1217 12:06:51.425140 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.425154 3219848 logs.go:284] No container was found matching "etcd"
	I1217 12:06:51.425161 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1217 12:06:51.425219 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:06:51.449708 3219848 cri.go:89] found id: ""
	I1217 12:06:51.449731 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.449740 3219848 logs.go:284] No container was found matching "coredns"
	I1217 12:06:51.449746 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:06:51.449818 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:06:51.478561 3219848 cri.go:89] found id: ""
	I1217 12:06:51.478585 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.478594 3219848 logs.go:284] No container was found matching "kube-scheduler"
	I1217 12:06:51.478601 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:06:51.478687 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:06:51.520104 3219848 cri.go:89] found id: ""
	I1217 12:06:51.520142 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.520152 3219848 logs.go:284] No container was found matching "kube-proxy"
	I1217 12:06:51.520159 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:06:51.520227 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:06:51.589783 3219848 cri.go:89] found id: ""
	I1217 12:06:51.589826 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.589836 3219848 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 12:06:51.589843 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1217 12:06:51.589914 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:06:51.616852 3219848 cri.go:89] found id: ""
	I1217 12:06:51.616888 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.616898 3219848 logs.go:284] No container was found matching "kindnet"
	I1217 12:06:51.616904 3219848 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1217 12:06:51.616967 3219848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1217 12:06:51.643529 3219848 cri.go:89] found id: ""
	I1217 12:06:51.643609 3219848 logs.go:282] 0 containers: []
	W1217 12:06:51.643632 3219848 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 12:06:51.643661 3219848 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:06:51.643706 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:06:51.707671 3219848 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:06:51.699393   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.700178   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.701673   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.702158   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:06:51.703665   13266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 12:06:51.707744 3219848 logs.go:123] Gathering logs for containerd ...
	I1217 12:06:51.707772 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1217 12:06:51.733586 3219848 logs.go:123] Gathering logs for container status ...
	I1217 12:06:51.733622 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:06:51.763883 3219848 logs.go:123] Gathering logs for kubelet ...
	I1217 12:06:51.763912 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:06:51.818754 3219848 logs.go:123] Gathering logs for dmesg ...
	I1217 12:06:51.818788 3219848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:06:54.336140 3219848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:06:54.350294 3219848 out.go:203] 
	W1217 12:06:54.353246 3219848 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 12:06:54.353303 3219848 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 12:06:54.353317 3219848 out.go:285] * Related issues:
	W1217 12:06:54.353339 3219848 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 12:06:54.353354 3219848 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 12:06:54.356285 3219848 out.go:203] 
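
The start exits with K8S_APISERVER_MISSING because the six-minute wait above never found an apiserver process. Both of the suggested checks can be performed on the node; a minimal sketch under the same profile assumption (getenforce exists only where SELinux is installed, hence the fallback):

    # check the SELinux suggestion (sketch; prints a fallback on hosts without SELinux)
    minikube ssh -p newest-cni-669680 "getenforce 2>/dev/null || echo 'SELinux not installed'"
    # the kubelet journal usually shows why the static pods never appeared
    minikube ssh -p newest-cni-669680 "sudo journalctl -u kubelet -n 50 --no-pager"

Here the kubelet journal quoted further down already contains the decisive error.
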
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.201958753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.201978051Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202016040Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202033845Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202043395Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202054242Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202063719Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202075034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202091206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202122163Z" level=info msg="Connect containerd service"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202376764Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.202915340Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221759735Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221831644Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221883507Z" level=info msg="Start subscribing containerd event"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.221927577Z" level=info msg="Start recovering state"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261428629Z" level=info msg="Start event monitor"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261488361Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261499979Z" level=info msg="Start streaming server"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261510449Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261519753Z" level=info msg="runtime interface starting up..."
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261526965Z" level=info msg="starting plugins..."
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261557275Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 12:00:50 newest-cni-669680 containerd[554]: time="2025-12-17T12:00:50.261851842Z" level=info msg="containerd successfully booted in 0.083557s"
	Dec 17 12:00:50 newest-cni-669680 systemd[1]: Started containerd.service - containerd container runtime.
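
One containerd line above deserves a note: CNI config loading failed because nothing is installed in /etc/cni/net.d yet. The CRI plugin retries this later, and since the control-plane static pods use host networking it is unlikely to be what kept the apiserver from starting. A small sketch to confirm the state the message describes, under the same profile assumption:

    # inspect the CNI config directory containerd complained about (sketch)
    minikube ssh -p newest-cni-669680 "ls -la /etc/cni/net.d"
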
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:07:07.620691   13942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:07:07.621390   13942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:07:07.622916   13942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:07:07.623558   13942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:07:07.625387   13942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +26.532481] overlayfs: idmapped layers are currently not supported
	[Dec17 09:26] overlayfs: idmapped layers are currently not supported
	[Dec17 09:27] overlayfs: idmapped layers are currently not supported
	[Dec17 09:29] overlayfs: idmapped layers are currently not supported
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 12:07:07 up 17:49,  0 user,  load average: 1.43, 0.91, 1.16
	Linux newest-cni-669680 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 12:07:04 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:07:05 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 17 12:07:05 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:05 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:05 newest-cni-669680 kubelet[13804]: E1217 12:07:05.320011   13804 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:07:05 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:07:05 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:07:05 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 17 12:07:05 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:05 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:06 newest-cni-669680 kubelet[13831]: E1217 12:07:06.060190   13831 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:07:06 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:07:06 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:07:06 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 17 12:07:06 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:06 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:06 newest-cni-669680 kubelet[13845]: E1217 12:07:06.809501   13845 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:07:06 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:07:06 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:07:07 newest-cni-669680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 17 12:07:07 newest-cni-669680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:07 newest-cni-669680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:07:07 newest-cni-669680 kubelet[13930]: E1217 12:07:07.567110   13930 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:07:07 newest-cni-669680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:07:07 newest-cni-669680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
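
The decisive error is in the kubelet section above: this kubelet (v1.35.0-rc.1) refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so it crash-loops before any static pod (apiserver, etcd, scheduler, controller-manager) can be created, which is exactly why every crictl listing earlier came back empty. A one-line check of the host's cgroup mode, which the docker driver passes through to the node:

    # 'cgroup2fs' means cgroup v2; 'tmpfs' means the legacy v1 hierarchy (sketch)
    stat -fc %T /sys/fs/cgroup

The kernel banner above ("5.15.0-1084-aws #91~20.04.1-Ubuntu") suggests an Ubuntu 20.04 host, which still defaults to cgroup v1 and matches the validation error.
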
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-669680 -n newest-cni-669680: exit status 2 (346.005707ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-669680" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.62s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (257.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
	(the WARNING above was emitted verbatim on every poll for the remainder of the wait; duplicates collapsed here, distinct interleaved log lines kept in order)
I1217 12:12:45.645421 2924574 config.go:182] Loaded profile config "custom-flannel-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
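For reference, the repeated WARNING is the signature of a poll-until-timeout loop: the helper lists pods by label selector once per interval and treats transport errors such as "connection refused" as retriable until the 9m0s deadline expires. A minimal client-go sketch of that pattern; the namespace and label selector match the log, while the kubeconfig path and 3s interval are assumptions for illustration:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; the test uses the profile's own config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 3s for up to 9m, tolerating transient errors such as
	// "connection refused" while the apiserver is down (the WARNINGs above).
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				fmt.Println("WARNING: pod list returned:", err) // keep polling
				return false, nil
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		fmt.Println("timed out waiting for dashboard pods:", err)
	}
}
```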
E1217 12:13:04.223058 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/auto-348887/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	(the cert_rotation error above recurred with backoff for auto-348887 at 12:13:04.229725, 12:13:04.241064, 12:13:04.262489, 12:13:04.303957, 12:13:04.385179, 12:13:04.547265, 12:13:04.869259, 12:13:05.510995, 12:13:06.793083, 12:13:09.355077, 12:13:14.476822, 12:13:24.718847, and 12:13:45.201308, and was logged once each for functional-232588 at 12:13:36.152650 and old-k8s-version-112124 at 12:13:50.515787; throughout, the connection-refused pod-list WARNING kept repeating verbatim)
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1217 12:14:03.818567 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/default-k8s-diff-port-224095/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1217 12:14:26.163460 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/auto-348887/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1217 12:14:28.205805 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:14:29.439162 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kindnet-348887/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the kindnet-348887 error above recurred 12 more times between 12:14:29 and 12:15:10 at roughly doubling intervals, consistent with reload retry backoff, interleaved with another 42 verbatim repetitions of the dashboard pod-list WARNING]
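The cert_rotation.go errors above come from client-go's TLS transport cache re-reading a client certificate from disk and failing because the named profiles (auto-348887, old-k8s-version-112124, kindnet-348887, ...) no longer exist on disk. Below is a minimal sketch of the reload-on-handshake pattern, not client-go's actual implementation; the paths are taken from the log, the function name is hypothetical:

package main

import (
	"crypto/tls"
	"fmt"
)

// reloadingClientCert returns a callback suitable for
// tls.Config.GetClientCertificate that re-reads the key pair from disk on
// every handshake. If the profile directory is deleted in the meantime,
// tls.LoadX509KeyPair fails with "no such file or directory", which is the
// error surfaced by the cert_rotation.go lines above.
func reloadingClientCert(certFile, keyFile string) func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
	return func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
		cert, err := tls.LoadX509KeyPair(certFile, keyFile)
		if err != nil {
			return nil, fmt.Errorf("Loading client cert failed: %w", err)
		}
		return &cert, nil
	}
}

func main() {
	getCert := reloadingClientCert(
		"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kindnet-348887/client.crt",
		"/home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kindnet-348887/client.key", // assumed key path
	)
	if _, err := getCert(nil); err != nil {
		fmt.Println(err) // the profile was deleted, so the open fails
	}
}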
[the dashboard pod-list WARNING repeated a further 13 times until the 9m0s wait below hit its deadline]
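The repeated WARNING lines above are produced by a list-and-retry loop in helpers_test.go (whose source is not included in this report): it re-lists pods matching the label selector until its context deadline, logging transient errors such as the connection refusals instead of failing on them. A minimal client-go sketch of such a loop, under that assumption; the kubeconfig path, poll interval, and function name are hypothetical:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods re-lists pods matching selector until one shows up or ctx
// expires. List errors (e.g. "connect: connection refused" while the
// apiserver is down) are logged as warnings and retried, matching the
// shape of the log above. The real helper also checks pod readiness; this
// sketch stops at "a matching pod exists".
func waitForPods(ctx context.Context, client kubernetes.Interface, ns, selector string) error {
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()
	for {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else if len(pods.Items) > 0 {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q failed to start: %w", selector, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	err = waitForPods(ctx, kubernetes.NewForConfigOrDie(config), "kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
	fmt.Println(err) // after 9m0s against a stopped apiserver: context deadline exceeded
}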
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262: exit status 2 (324.30487ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-118262" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-118262 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-118262 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.649µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-118262 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
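The kubectl describe failing after only 1.649µs is consistent with the test's shared context having already passed its 9m0s deadline: Go's exec.CommandContext refuses to start a command whose context is already done, so kubectl is never executed and no deployment info is printed. A self-contained demonstration (sleep stands in for the kubectl call, which never runs anyway):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// A deadline already in the past, as if a 9m budget had been fully
	// consumed by the earlier wait.
	ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(-time.Millisecond))
	defer cancel()

	start := time.Now()
	err := exec.CommandContext(ctx, "sleep", "60").Run()
	// Prints the context error after a few microseconds; the process is
	// never started.
	fmt.Printf("err=%v after %s\n", err, time.Since(start))
}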
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-118262
helpers_test.go:244: (dbg) docker inspect no-preload-118262:

-- stdout --
	[
	    {
	        "Id": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	        "Created": "2025-12-17T11:45:23.889791979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3213113,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T11:55:54.36927291Z",
	            "FinishedAt": "2025-12-17T11:55:53.009633374Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hostname",
	        "HostsPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/hosts",
	        "LogPath": "/var/lib/docker/containers/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362/4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362-json.log",
	        "Name": "/no-preload-118262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-118262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-118262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4578079103f7f9b494568c4fa8b014b5efefc309def78da8cf04e623a5051362",
	                "LowerDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdd077f2c6605806598ef9b7af88158242ef501d0b0a69220f9ac13c43552d82/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-118262",
	                "Source": "/var/lib/docker/volumes/no-preload-118262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-118262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-118262",
	                "name.minikube.sigs.k8s.io": "no-preload-118262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a5bb1af38cbf7e52f627da4de2cc21445576f9ee9ac16469472822e1e4e3c56f",
	            "SandboxKey": "/var/run/docker/netns/a5bb1af38cbf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-118262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:fb:41:14:2f:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3227851744df2bdac9c367dc789ddfe2892f877b7b9b947cdcd81cb2897c4ba1",
	                    "EndpointID": "c35288f197473390678d887f2fedc1b13457164e1aa2e715d8bd350b76e059bf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-118262",
	                        "4578079103f7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
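The two facts the post-mortem needs from this dump are that the container is still attached to network no-preload-118262 with IP 192.168.85.2, and that the apiserver port 8443/tcp is published on 127.0.0.1:36051. A sketch of extracting just those fields programmatically; the struct mirrors only the keys visible in the output above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the slice of the docker inspect document this post-mortem cares about.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
		Networks map[string]struct {
			IPAddress string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-118262").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 {
		panic("no such container")
	}
	c := containers[0]
	for name, nw := range c.NetworkSettings.Networks {
		fmt.Printf("network %s: IP %s\n", name, nw.IPAddress) // 192.168.85.2
	}
	// Inside the container the apiserver listens on 8443; Docker publishes
	// it on a random localhost port (36051 here), which is how the host
	// reaches it.
	fmt.Println("8443/tcp ->", c.NetworkSettings.Ports["8443/tcp"])
}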
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262: exit status 2 (332.247899ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
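Note that the two status probes disagree on purpose: --format={{.Host}} and --format={{.APIServer}} are Go text/template expressions evaluated against minikube's status struct, so the container ("Host") can report Running while the apiserver inside it reports Stopped, which is exactly the wedged state captured here. A sketch with an assumed stand-in struct (the real type lives in minikube's source, not shown in this report):

package main

import (
	"os"
	"text/template"
)

// Stand-in for minikube's status struct; the field names are inferred
// from the --format flags used above.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	s := Status{Host: "Running", APIServer: "Stopped"}
	for _, f := range []string{"{{.Host}}", "{{.APIServer}}"} {
		t := template.Must(template.New("status").Parse(f))
		if err := t.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
		os.Stdout.WriteString("\n")
	}
}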
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-118262 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl status kubelet --all --full --no-pager                                                             │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl cat kubelet --no-pager                                                                             │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo journalctl -xeu kubelet --all --full --no-pager                                                              │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo cat /etc/kubernetes/kubelet.conf                                                                             │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo cat /var/lib/kubelet/config.yaml                                                                             │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl status docker --all --full --no-pager                                                              │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │                     │
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl cat docker --no-pager                                                                              │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo cat /etc/docker/daemon.json                                                                                  │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │                     │
	│ ssh     │ -p enable-default-cni-348887 sudo docker system info                                                                                           │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │                     │
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl status cri-docker --all --full --no-pager                                                          │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │                     │
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl cat cri-docker --no-pager                                                                          │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                     │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │                     │
	│ ssh     │ -p enable-default-cni-348887 sudo cat /usr/lib/systemd/system/cri-docker.service                                                               │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo cri-dockerd --version                                                                                        │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl status containerd --all --full --no-pager                                                          │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl cat containerd --no-pager                                                                          │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo cat /lib/systemd/system/containerd.service                                                                   │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo cat /etc/containerd/config.toml                                                                              │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo containerd config dump                                                                                       │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl status crio --all --full --no-pager                                                                │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │                     │
	│ ssh     │ -p enable-default-cni-348887 sudo systemctl cat crio --no-pager                                                                                │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                      │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ ssh     │ -p enable-default-cni-348887 sudo crio config                                                                                                  │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ delete  │ -p enable-default-cni-348887                                                                                                                   │ enable-default-cni-348887 │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │ 17 Dec 25 12:14 UTC │
	│ start   │ -p flannel-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd │ flannel-348887            │ jenkins │ v1.37.0 │ 17 Dec 25 12:14 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 12:14:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 12:14:32.532496 3273554 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:14:32.532661 3273554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:14:32.532671 3273554 out.go:374] Setting ErrFile to fd 2...
	I1217 12:14:32.532677 3273554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:14:32.532927 3273554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 12:14:32.533351 3273554 out.go:368] Setting JSON to false
	I1217 12:14:32.534254 3273554 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":64623,"bootTime":1765909050,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 12:14:32.534324 3273554 start.go:143] virtualization:  
	I1217 12:14:32.538646 3273554 out.go:179] * [flannel-348887] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 12:14:32.543307 3273554 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 12:14:32.543454 3273554 notify.go:221] Checking for updates...
	I1217 12:14:32.550165 3273554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:14:32.553381 3273554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:14:32.556515 3273554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 12:14:32.559543 3273554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 12:14:32.562508 3273554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:14:32.566131 3273554 config.go:182] Loaded profile config "no-preload-118262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 12:14:32.566227 3273554 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:14:32.603774 3273554 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 12:14:32.603983 3273554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:14:32.665934 3273554 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:14:32.655867864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:14:32.666048 3273554 docker.go:319] overlay module found
	I1217 12:14:32.669487 3273554 out.go:179] * Using the docker driver based on user configuration
	I1217 12:14:32.672471 3273554 start.go:309] selected driver: docker
	I1217 12:14:32.672489 3273554 start.go:927] validating driver "docker" against <nil>
	I1217 12:14:32.672503 3273554 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:14:32.673244 3273554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 12:14:32.727940 3273554 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 12:14:32.718227111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 12:14:32.728146 3273554 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 12:14:32.728510 3273554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 12:14:32.731605 3273554 out.go:179] * Using Docker driver with root privileges
	I1217 12:14:32.734473 3273554 cni.go:84] Creating CNI manager for "flannel"
	I1217 12:14:32.734501 3273554 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1217 12:14:32.734593 3273554 start.go:353] cluster config:
	{Name:flannel-348887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:flannel-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:14:32.737885 3273554 out.go:179] * Starting "flannel-348887" primary control-plane node in "flannel-348887" cluster
	I1217 12:14:32.740807 3273554 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 12:14:32.743781 3273554 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 12:14:32.746719 3273554 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1217 12:14:32.746775 3273554 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1217 12:14:32.746786 3273554 cache.go:65] Caching tarball of preloaded images
	I1217 12:14:32.746825 3273554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 12:14:32.746876 3273554 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1217 12:14:32.746887 3273554 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1217 12:14:32.747002 3273554 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/config.json ...
	I1217 12:14:32.747019 3273554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/config.json: {Name:mk3c202e2e65e0274c20949407a70aba7ffa09cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:14:32.771810 3273554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 12:14:32.771836 3273554 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 12:14:32.771852 3273554 cache.go:243] Successfully downloaded all kic artifacts
	I1217 12:14:32.771883 3273554 start.go:360] acquireMachinesLock for flannel-348887: {Name:mk2d76f62710f0a3711968aac60d865c64dc062c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:14:32.771998 3273554 start.go:364] duration metric: took 89.237µs to acquireMachinesLock for "flannel-348887"
	I1217 12:14:32.772031 3273554 start.go:93] Provisioning new machine with config: &{Name:flannel-348887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:flannel-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 12:14:32.772104 3273554 start.go:125] createHost starting for "" (driver="docker")
	I1217 12:14:32.775677 3273554 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 12:14:32.775976 3273554 start.go:159] libmachine.API.Create for "flannel-348887" (driver="docker")
	I1217 12:14:32.776031 3273554 client.go:173] LocalClient.Create starting
	I1217 12:14:32.776133 3273554 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
	I1217 12:14:32.776201 3273554 main.go:143] libmachine: Decoding PEM data...
	I1217 12:14:32.776228 3273554 main.go:143] libmachine: Parsing certificate...
	I1217 12:14:32.776307 3273554 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
	I1217 12:14:32.776360 3273554 main.go:143] libmachine: Decoding PEM data...
	I1217 12:14:32.776379 3273554 main.go:143] libmachine: Parsing certificate...
	I1217 12:14:32.776822 3273554 cli_runner.go:164] Run: docker network inspect flannel-348887 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 12:14:32.796989 3273554 cli_runner.go:211] docker network inspect flannel-348887 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 12:14:32.797066 3273554 network_create.go:284] running [docker network inspect flannel-348887] to gather additional debugging logs...
	I1217 12:14:32.797095 3273554 cli_runner.go:164] Run: docker network inspect flannel-348887
	W1217 12:14:32.813173 3273554 cli_runner.go:211] docker network inspect flannel-348887 returned with exit code 1
	I1217 12:14:32.813202 3273554 network_create.go:287] error running [docker network inspect flannel-348887]: docker network inspect flannel-348887: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-348887 not found
	I1217 12:14:32.813216 3273554 network_create.go:289] output of [docker network inspect flannel-348887]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-348887 not found
	
	** /stderr **
	I1217 12:14:32.813316 3273554 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 12:14:32.830820 3273554 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
	I1217 12:14:32.831167 3273554 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e0545776686c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:70:9e:49:ed:7d} reservation:<nil>}
	I1217 12:14:32.831527 3273554 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-279becfad84b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:b7:62:6e:a9:ee} reservation:<nil>}
	I1217 12:14:32.831973 3273554 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cd8e0}
	I1217 12:14:32.831994 3273554 network_create.go:124] attempt to create docker network flannel-348887 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 12:14:32.832058 3273554 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-348887 flannel-348887
	I1217 12:14:32.889443 3273554 network_create.go:108] docker network flannel-348887 192.168.76.0/24 created
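	
	The subnet scan above walks the private 192.168.x.0/24 ranges in steps of 9 (49, 58, 67, 76, ...) and takes the first /24 that no existing bridge occupies, then pins the node to the .2 address. A rough shell equivalent of that probe (a sketch, not minikube's actual code, which does this in-process; the network name my-net is a placeholder):
	
	for third in 49 58 67 76 85; do
	  # Docker refuses a subnet that overlaps an existing network ("Pool overlaps"),
	  # so the first successful create marks a free /24.
	  if docker network create --driver=bridge \
	       --subnet="192.168.${third}.0/24" --gateway="192.168.${third}.1" \
	       -o com.docker.network.driver.mtu=1500 my-net 2>/dev/null; then
	    echo "free subnet 192.168.${third}.0/24; node IP will be 192.168.${third}.2"
	    break
	  fi
	done
	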
	I1217 12:14:32.889479 3273554 kic.go:121] calculated static IP "192.168.76.2" for the "flannel-348887" container
	I1217 12:14:32.889556 3273554 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 12:14:32.906345 3273554 cli_runner.go:164] Run: docker volume create flannel-348887 --label name.minikube.sigs.k8s.io=flannel-348887 --label created_by.minikube.sigs.k8s.io=true
	I1217 12:14:32.924857 3273554 oci.go:103] Successfully created a docker volume flannel-348887
	I1217 12:14:32.924949 3273554 cli_runner.go:164] Run: docker run --rm --name flannel-348887-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-348887 --entrypoint /usr/bin/test -v flannel-348887:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 12:14:33.435183 3273554 oci.go:107] Successfully prepared a docker volume flannel-348887
	I1217 12:14:33.435258 3273554 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1217 12:14:33.435274 3273554 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 12:14:33.435339 3273554 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v flannel-348887:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 12:14:37.541269 3273554 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v flannel-348887:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.105894871s)
	I1217 12:14:37.541302 3273554 kic.go:203] duration metric: took 4.10602515s to extract preloaded images to volume ...
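	
	The extraction trick above is generic: a throwaway container whose entrypoint is tar, with the preload mounted read-only and the named volume mounted as the destination, so the images land in the volume before the node container ever starts. The same pattern, condensed (a sketch; VOLUME and TARBALL stand in for the values logged above):
	
	VOLUME=flannel-348887
	TARBALL=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$TARBALL":/preloaded.tar:ro \
	  -v "$VOLUME":/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
	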
	W1217 12:14:37.541479 3273554 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 12:14:37.541597 3273554 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 12:14:37.599468 3273554 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-348887 --name flannel-348887 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-348887 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-348887 --network flannel-348887 --ip 192.168.76.2 --volume flannel-348887:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 12:14:37.887234 3273554 cli_runner.go:164] Run: docker container inspect flannel-348887 --format={{.State.Running}}
	I1217 12:14:37.912266 3273554 cli_runner.go:164] Run: docker container inspect flannel-348887 --format={{.State.Status}}
	I1217 12:14:37.939839 3273554 cli_runner.go:164] Run: docker exec flannel-348887 stat /var/lib/dpkg/alternatives/iptables
	I1217 12:14:37.994439 3273554 oci.go:144] the created container "flannel-348887" has a running status.
	I1217 12:14:37.994498 3273554 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/flannel-348887/id_ed25519...
	I1217 12:14:38.003122 3273554 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/flannel-348887/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
	I1217 12:14:38.031172 3273554 cli_runner.go:164] Run: docker container inspect flannel-348887 --format={{.State.Status}}
	I1217 12:14:38.058561 3273554 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 12:14:38.058580 3273554 kic_runner.go:114] Args: [docker exec --privileged flannel-348887 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 12:14:38.135940 3273554 cli_runner.go:164] Run: docker container inspect flannel-348887 --format={{.State.Status}}
	I1217 12:14:38.158384 3273554 machine.go:94] provisionDockerMachine start ...
	I1217 12:14:38.158469 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:14:38.179234 3273554 main.go:143] libmachine: Using SSH client type: native
	I1217 12:14:38.179348 3273554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1217 12:14:38.179356 3273554 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:14:38.179877 3273554 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59920->127.0.0.1:36083: read: connection reset by peer
	I1217 12:14:41.312111 3273554 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-348887
	
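	
	Every SSH connection in this phase goes to 127.0.0.1 on whatever ephemeral host port Docker mapped to the container's port 22 (36083 in this run). That mapping can be recovered by hand with the same Go template the log shows (a sketch; the key path follows the machine layout above):
	
	PORT=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  flannel-348887)
	ssh -o StrictHostKeyChecking=no \
	  -i ~/.minikube/machines/flannel-348887/id_ed25519 \
	  -p "$PORT" docker@127.0.0.1 hostname
	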
	I1217 12:14:41.312193 3273554 ubuntu.go:182] provisioning hostname "flannel-348887"
	I1217 12:14:41.312289 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:14:41.330079 3273554 main.go:143] libmachine: Using SSH client type: native
	I1217 12:14:41.330194 3273554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1217 12:14:41.330211 3273554 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-348887 && echo "flannel-348887" | sudo tee /etc/hostname
	I1217 12:14:41.470769 3273554 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-348887
	
	I1217 12:14:41.470878 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:14:41.490153 3273554 main.go:143] libmachine: Using SSH client type: native
	I1217 12:14:41.490267 3273554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil>  [] 0s} 127.0.0.1 36083 <nil> <nil>}
	I1217 12:14:41.490287 3273554 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-348887' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-348887/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-348887' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:14:41.632915 3273554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:14:41.632943 3273554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
	I1217 12:14:41.632975 3273554 ubuntu.go:190] setting up certificates
	I1217 12:14:41.632984 3273554 provision.go:84] configureAuth start
	I1217 12:14:41.633056 3273554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-348887
	I1217 12:14:41.650945 3273554 provision.go:143] copyHostCerts
	I1217 12:14:41.651011 3273554 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
	I1217 12:14:41.651020 3273554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
	I1217 12:14:41.651099 3273554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
	I1217 12:14:41.651192 3273554 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
	I1217 12:14:41.651198 3273554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
	I1217 12:14:41.651223 3273554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
	I1217 12:14:41.651275 3273554 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
	I1217 12:14:41.651279 3273554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
	I1217 12:14:41.651302 3273554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
	I1217 12:14:41.651345 3273554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.flannel-348887 san=[127.0.0.1 192.168.76.2 flannel-348887 localhost minikube]
	I1217 12:14:41.787766 3273554 provision.go:177] copyRemoteCerts
	I1217 12:14:41.787893 3273554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:14:41.787965 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:14:41.805272 3273554 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/flannel-348887/id_ed25519 Username:docker}
	I1217 12:14:41.900151 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:14:41.917585 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1217 12:14:41.935266 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 12:14:41.953260 3273554 provision.go:87] duration metric: took 320.252872ms to configureAuth
	I1217 12:14:41.953288 3273554 ubuntu.go:206] setting minikube options for container-runtime
	I1217 12:14:41.953467 3273554 config.go:182] Loaded profile config "flannel-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 12:14:41.953484 3273554 machine.go:97] duration metric: took 3.795081963s to provisionDockerMachine
	I1217 12:14:41.953491 3273554 client.go:176] duration metric: took 9.177448663s to LocalClient.Create
	I1217 12:14:41.953505 3273554 start.go:167] duration metric: took 9.177530385s to libmachine.API.Create "flannel-348887"
	I1217 12:14:41.953512 3273554 start.go:293] postStartSetup for "flannel-348887" (driver="docker")
	I1217 12:14:41.953528 3273554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:14:41.953584 3273554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:14:41.953630 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:14:41.970414 3273554 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/flannel-348887/id_ed25519 Username:docker}
	I1217 12:14:42.073789 3273554 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:14:42.078423 3273554 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 12:14:42.078469 3273554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 12:14:42.078484 3273554 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
	I1217 12:14:42.078561 3273554 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
	I1217 12:14:42.078650 3273554 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
	I1217 12:14:42.078780 3273554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:14:42.088126 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:14:42.111604 3273554 start.go:296] duration metric: took 158.075816ms for postStartSetup
	I1217 12:14:42.112075 3273554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-348887
	I1217 12:14:42.132778 3273554 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/config.json ...
	I1217 12:14:42.133151 3273554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 12:14:42.133212 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:14:42.196122 3273554 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/flannel-348887/id_ed25519 Username:docker}
	I1217 12:14:42.294461 3273554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 12:14:42.300101 3273554 start.go:128] duration metric: took 9.5279811s to createHost
	I1217 12:14:42.300130 3273554 start.go:83] releasing machines lock for "flannel-348887", held for 9.528116514s
	I1217 12:14:42.300218 3273554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-348887
	I1217 12:14:42.319317 3273554 ssh_runner.go:195] Run: cat /version.json
	I1217 12:14:42.319376 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:14:42.319680 3273554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:14:42.319753 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:14:42.338801 3273554 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/flannel-348887/id_ed25519 Username:docker}
	I1217 12:14:42.358624 3273554 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/flannel-348887/id_ed25519 Username:docker}
	I1217 12:14:42.436235 3273554 ssh_runner.go:195] Run: systemctl --version
	I1217 12:14:42.531560 3273554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:14:42.536935 3273554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:14:42.537024 3273554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:14:42.566507 3273554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 12:14:42.566532 3273554 start.go:496] detecting cgroup driver to use...
	I1217 12:14:42.566584 3273554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 12:14:42.566654 3273554 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1217 12:14:42.583459 3273554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 12:14:42.597568 3273554 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:14:42.597634 3273554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:14:42.615484 3273554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:14:42.634601 3273554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:14:42.766203 3273554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:14:42.891723 3273554 docker.go:234] disabling docker service ...
	I1217 12:14:42.891834 3273554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:14:42.913908 3273554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:14:42.926986 3273554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:14:43.048170 3273554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:14:43.172927 3273554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
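	
	Because containerd is the selected runtime, the docker and cri-docker units that ship in the kicbase image are stopped, disabled, and masked so socket activation cannot revive them; the final is-active check confirms docker stayed down. The same sequence, condensed into one loop (a sketch, to be run inside the node container):
	
	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit" 2>/dev/null || true
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker || echo "docker is down"
	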
	I1217 12:14:43.187384 3273554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:14:43.202548 3273554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 12:14:43.212142 3273554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 12:14:43.221984 3273554 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 12:14:43.222104 3273554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 12:14:43.231187 3273554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:14:43.240113 3273554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 12:14:43.251110 3273554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 12:14:43.260930 3273554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:14:43.269218 3273554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 12:14:43.278593 3273554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 12:14:43.287407 3273554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 12:14:43.296652 3273554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:14:43.304852 3273554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:14:43.312959 3273554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:14:43.425320 3273554 ssh_runner.go:195] Run: sudo systemctl restart containerd
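	
	The sed pipeline above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, legacy runc v1 runtime names are mapped to io.containerd.runc.v2, and enable_unprivileged_ports is injected under the CRI plugin. A quick way to confirm the edits landed before trusting the restart (a sketch):
	
	grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' \
	  /etc/containerd/config.toml
	# expected, given the edits above:
	#   SystemdCgroup = false
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   enable_unprivileged_ports = true
	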
	I1217 12:14:43.566111 3273554 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1217 12:14:43.566237 3273554 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1217 12:14:43.570314 3273554 start.go:564] Will wait 60s for crictl version
	I1217 12:14:43.570388 3273554 ssh_runner.go:195] Run: which crictl
	I1217 12:14:43.574042 3273554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 12:14:43.601861 3273554 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
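	
	crictl finds containerd here only because of the /etc/crictl.yaml written a moment earlier; that file's single key is equivalent to passing the endpoint explicitly on each invocation:
	
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	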
	I1217 12:14:43.601947 3273554 ssh_runner.go:195] Run: containerd --version
	I1217 12:14:43.623057 3273554 ssh_runner.go:195] Run: containerd --version
	I1217 12:14:43.648380 3273554 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1217 12:14:43.651421 3273554 cli_runner.go:164] Run: docker network inspect flannel-348887 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 12:14:43.667438 3273554 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 12:14:43.671222 3273554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:14:43.680734 3273554 kubeadm.go:884] updating cluster {Name:flannel-348887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:flannel-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:14:43.680849 3273554 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1217 12:14:43.680923 3273554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:14:43.709722 3273554 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:14:43.709746 3273554 containerd.go:534] Images already preloaded, skipping extraction
	I1217 12:14:43.709824 3273554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:14:43.733566 3273554 containerd.go:627] all images are preloaded for containerd runtime.
	I1217 12:14:43.733631 3273554 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:14:43.733645 3273554 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 containerd true true} ...
	I1217 12:14:43.733744 3273554 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-348887 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:flannel-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1217 12:14:43.733820 3273554 ssh_runner.go:195] Run: sudo crictl info
	I1217 12:14:43.759319 3273554 cni.go:84] Creating CNI manager for "flannel"
	I1217 12:14:43.759352 3273554 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 12:14:43.759375 3273554 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-348887 NodeName:flannel-348887 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:14:43.759501 3273554 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "flannel-348887"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
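	
	Two deliberate CI-oriented choices in the kubeadm config above are easy to miss: the kubelet section disables disk-pressure eviction outright (imageGCHighThresholdPercent: 100 and every evictionHard threshold at 0%), and the kube-proxy conntrack timeouts of 0s mean "leave the kernel defaults untouched". Once the cluster is up, the configuration kubeadm actually stored can be read back from the standard kubeadm ConfigMaps (generic kubeadm behavior, not specific to this run):
	
	kubectl -n kube-system get configmap kubeadm-config -o yaml
	kubectl -n kube-system get configmap kubelet-config -o yaml
	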
	I1217 12:14:43.759573 3273554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 12:14:43.767349 3273554 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:14:43.767422 3273554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:14:43.774584 3273554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1217 12:14:43.787162 3273554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 12:14:43.799817 3273554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1217 12:14:43.812998 3273554 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 12:14:43.816529 3273554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:14:43.826480 3273554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:14:43.942262 3273554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:14:43.958370 3273554 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887 for IP: 192.168.76.2
	I1217 12:14:43.958437 3273554 certs.go:195] generating shared ca certs ...
	I1217 12:14:43.958467 3273554 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:14:43.958641 3273554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
	I1217 12:14:43.958726 3273554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
	I1217 12:14:43.958762 3273554 certs.go:257] generating profile certs ...
	I1217 12:14:43.958843 3273554 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/client.key
	I1217 12:14:43.958910 3273554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/client.crt with IP's: []
	I1217 12:14:44.727021 3273554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/client.crt ...
	I1217 12:14:44.727055 3273554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/client.crt: {Name:mk57d5cb174237eb88fe4f1f0472749ef3b23a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:14:44.727257 3273554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/client.key ...
	I1217 12:14:44.727274 3273554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/client.key: {Name:mkd8c22f1ec0c52393f4afe901a6a81f3921b82a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:14:44.727372 3273554 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.key.dc0f55c0
	I1217 12:14:44.727389 3273554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.crt.dc0f55c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 12:14:45.071573 3273554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.crt.dc0f55c0 ...
	I1217 12:14:45.071615 3273554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.crt.dc0f55c0: {Name:mke30d02f501c387d70cadaa6e9f46ec3f31af41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:14:45.071827 3273554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.key.dc0f55c0 ...
	I1217 12:14:45.071844 3273554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.key.dc0f55c0: {Name:mkc797872374380bcf90fec8d8f4b84d28712549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:14:45.071942 3273554 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.crt.dc0f55c0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.crt
	I1217 12:14:45.072038 3273554 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.key.dc0f55c0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.key
	I1217 12:14:45.072110 3273554 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/proxy-client.key
	I1217 12:14:45.072130 3273554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/proxy-client.crt with IP's: []
	I1217 12:14:45.788710 3273554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/proxy-client.crt ...
	I1217 12:14:45.788744 3273554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/proxy-client.crt: {Name:mke810498393a1b31749d1dc4e691e60b9c3d780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:14:45.788962 3273554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/proxy-client.key ...
	I1217 12:14:45.788979 3273554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/proxy-client.key: {Name:mk9c28c08df826c15344fe57c7f10a119ca9a90b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
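
The profile-cert steps above sign leaf certificates against the shared minikube CA, with the apiserver cert carrying the listed IP SANs. A self-contained Go sketch of that signing step; generating a throwaway CA in-process stands in for loading the real ca.crt/ca.key, and the names are illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA; the real flow loads the existing minikubeCA key pair.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The same IP set the log reports for the apiserver cert:
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
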
	I1217 12:14:45.789221 3273554 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
	W1217 12:14:45.789263 3273554 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
	I1217 12:14:45.789272 3273554 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:14:45.789301 3273554 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:14:45.789330 3273554 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:14:45.789353 3273554 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
	I1217 12:14:45.789405 3273554 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
	I1217 12:14:45.789980 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:14:45.819075 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:14:45.839709 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:14:45.859712 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:14:45.878932 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 12:14:45.897695 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 12:14:45.915999 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:14:45.935157 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/flannel-348887/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:14:45.953489 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
	I1217 12:14:45.972200 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
	I1217 12:14:45.990758 3273554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:14:46.010986 3273554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:14:46.024957 3273554 ssh_runner.go:195] Run: openssl version
	I1217 12:14:46.031623 3273554 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
	I1217 12:14:46.039638 3273554 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
	I1217 12:14:46.047774 3273554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
	I1217 12:14:46.051839 3273554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
	I1217 12:14:46.051909 3273554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
	I1217 12:14:46.093378 3273554 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:14:46.101168 3273554 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
	I1217 12:14:46.108873 3273554 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:14:46.116581 3273554 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:14:46.124777 3273554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:14:46.128876 3273554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:14:46.128952 3273554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:14:46.170401 3273554 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:14:46.178317 3273554 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 12:14:46.186138 3273554 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
	I1217 12:14:46.193956 3273554 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
	I1217 12:14:46.201789 3273554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
	I1217 12:14:46.205901 3273554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
	I1217 12:14:46.205977 3273554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
	I1217 12:14:46.247419 3273554 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:14:46.255179 3273554 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
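
Each of the three ln -fs blocks above follows one pattern: compute the certificate's OpenSSL subject hash, then link /etc/ssl/certs/<hash>.0 to the PEM so OpenSSL-based clients find it in the trust directory. A sketch that shells out the same way, assuming openssl is on PATH and root access to /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func trustCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // emulate the -f in ln -fs
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
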
	I1217 12:14:46.263253 3273554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:14:46.268110 3273554 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 12:14:46.268222 3273554 kubeadm.go:401] StartCluster: {Name:flannel-348887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:flannel-348887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:14:46.268330 3273554 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1217 12:14:46.268445 3273554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:14:46.301913 3273554 cri.go:89] found id: ""
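
The crictl call above asks the CRI for any existing kube-system containers; the empty result (found id: "") is what tells minikube this node has no running control plane yet. A sketch of the same query, assuming crictl is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func kubeSystemContainerIDs() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil // one container ID per line, if any
    }

    func main() {
    	ids, err := kubeSystemContainerIDs()
    	fmt.Println(ids, err)
    }
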
	I1217 12:14:46.302028 3273554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:14:46.311661 3273554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 12:14:46.320912 3273554 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 12:14:46.321018 3273554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 12:14:46.329062 3273554 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 12:14:46.329082 3273554 kubeadm.go:158] found existing configuration files:
	
	I1217 12:14:46.329136 3273554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 12:14:46.337100 3273554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 12:14:46.337191 3273554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 12:14:46.344515 3273554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 12:14:46.352338 3273554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 12:14:46.352406 3273554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 12:14:46.360294 3273554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 12:14:46.368496 3273554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 12:14:46.368556 3273554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 12:14:46.375869 3273554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 12:14:46.383747 3273554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 12:14:46.383821 3273554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
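
The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init rewrites it. A compact sketch of that check, assuming local file access:

    package main

    import (
    	"bytes"
    	"os"
    )

    func cleanStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			os.Remove(p) // missing or stale: drop it, kubeadm will rewrite it
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
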
	I1217 12:14:46.391872 3273554 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 12:14:46.432365 3273554 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 12:14:46.432452 3273554 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 12:14:46.461672 3273554 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 12:14:46.461810 3273554 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 12:14:46.461881 3273554 kubeadm.go:319] OS: Linux
	I1217 12:14:46.461951 3273554 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 12:14:46.462065 3273554 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 12:14:46.462146 3273554 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 12:14:46.462216 3273554 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 12:14:46.462286 3273554 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 12:14:46.462364 3273554 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 12:14:46.462432 3273554 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 12:14:46.462500 3273554 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 12:14:46.462578 3273554 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 12:14:46.539443 3273554 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 12:14:46.539615 3273554 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 12:14:46.539744 3273554 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 12:14:46.549023 3273554 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 12:14:46.555569 3273554 out.go:252]   - Generating certificates and keys ...
	I1217 12:14:46.555732 3273554 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 12:14:46.555820 3273554 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 12:14:46.926343 3273554 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 12:14:47.587428 3273554 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 12:14:48.096969 3273554 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 12:14:48.219322 3273554 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 12:14:48.597626 3273554 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 12:14:48.598023 3273554 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-348887 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 12:14:49.450631 3273554 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 12:14:49.450987 3273554 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-348887 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 12:14:50.026346 3273554 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 12:14:50.281132 3273554 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 12:14:50.511250 3273554 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 12:14:50.511550 3273554 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 12:14:51.060819 3273554 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 12:14:51.268200 3273554 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 12:14:51.712334 3273554 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 12:14:51.890059 3273554 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 12:14:52.613320 3273554 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 12:14:52.614072 3273554 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 12:14:52.616782 3273554 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 12:14:52.620329 3273554 out.go:252]   - Booting up control plane ...
	I1217 12:14:52.620459 3273554 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 12:14:52.620539 3273554 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 12:14:52.622425 3273554 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 12:14:52.639821 3273554 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 12:14:52.639952 3273554 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 12:14:52.648054 3273554 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 12:14:52.648538 3273554 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 12:14:52.648776 3273554 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 12:14:52.792117 3273554 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 12:14:52.792238 3273554 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 12:14:54.292943 3273554 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500893573s
	I1217 12:14:54.296504 3273554 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 12:14:54.296608 3273554 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1217 12:14:54.296984 3273554 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 12:14:54.297094 3273554 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 12:14:57.990444 3273554 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.693712104s
	I1217 12:14:59.931498 3273554 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.63501968s
	I1217 12:15:01.298941 3273554 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002383392s
	I1217 12:15:01.338200 3273554 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 12:15:01.359877 3273554 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 12:15:01.375452 3273554 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 12:15:01.375666 3273554 kubeadm.go:319] [mark-control-plane] Marking the node flannel-348887 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 12:15:01.388229 3273554 kubeadm.go:319] [bootstrap-token] Using token: j8603m.2jtkbve9zufojdh9
	I1217 12:15:01.391206 3273554 out.go:252]   - Configuring RBAC rules ...
	I1217 12:15:01.391342 3273554 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 12:15:01.396364 3273554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 12:15:01.407388 3273554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 12:15:01.414296 3273554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 12:15:01.419445 3273554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 12:15:01.424513 3273554 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 12:15:01.708374 3273554 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 12:15:02.143068 3273554 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 12:15:02.705780 3273554 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 12:15:02.707287 3273554 kubeadm.go:319] 
	I1217 12:15:02.707365 3273554 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 12:15:02.707379 3273554 kubeadm.go:319] 
	I1217 12:15:02.707458 3273554 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 12:15:02.707469 3273554 kubeadm.go:319] 
	I1217 12:15:02.707495 3273554 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 12:15:02.707560 3273554 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 12:15:02.707613 3273554 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 12:15:02.707631 3273554 kubeadm.go:319] 
	I1217 12:15:02.707695 3273554 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 12:15:02.707705 3273554 kubeadm.go:319] 
	I1217 12:15:02.707753 3273554 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 12:15:02.707761 3273554 kubeadm.go:319] 
	I1217 12:15:02.707813 3273554 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 12:15:02.707897 3273554 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 12:15:02.707971 3273554 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 12:15:02.707977 3273554 kubeadm.go:319] 
	I1217 12:15:02.708063 3273554 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 12:15:02.708145 3273554 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 12:15:02.708153 3273554 kubeadm.go:319] 
	I1217 12:15:02.708237 3273554 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j8603m.2jtkbve9zufojdh9 \
	I1217 12:15:02.708344 3273554 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fcce8c321665b01ba73c1ff2f8ce9b2c8663c804203e09b134f0c8209e98634e \
	I1217 12:15:02.708369 3273554 kubeadm.go:319] 	--control-plane 
	I1217 12:15:02.708374 3273554 kubeadm.go:319] 
	I1217 12:15:02.708489 3273554 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 12:15:02.708500 3273554 kubeadm.go:319] 
	I1217 12:15:02.708583 3273554 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j8603m.2jtkbve9zufojdh9 \
	I1217 12:15:02.708690 3273554 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fcce8c321665b01ba73c1ff2f8ce9b2c8663c804203e09b134f0c8209e98634e 
	I1217 12:15:02.712556 3273554 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1217 12:15:02.712789 3273554 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 12:15:02.712898 3273554 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 12:15:02.712921 3273554 cni.go:84] Creating CNI manager for "flannel"
	I1217 12:15:02.715975 3273554 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	I1217 12:15:02.718975 3273554 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 12:15:02.723496 3273554 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 12:15:02.723524 3273554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1217 12:15:02.739509 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 12:15:03.172739 3273554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 12:15:03.172873 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:03.172975 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-348887 minikube.k8s.io/updated_at=2025_12_17T12_15_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144 minikube.k8s.io/name=flannel-348887 minikube.k8s.io/primary=true
	I1217 12:15:03.196971 3273554 ops.go:34] apiserver oom_adj: -16
	I1217 12:15:03.428356 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:03.929248 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:04.428539 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:04.928595 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:05.429179 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:05.928472 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:06.428546 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:06.928827 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:07.428544 3273554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:15:07.547920 3273554 kubeadm.go:1114] duration metric: took 4.375095179s to wait for elevateKubeSystemPrivileges
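
The half-second cadence of "kubectl get sa default" above is a readiness poll: the default ServiceAccount appearing means the controller-manager is far enough along for the RBAC binding to take effect. A sketch of that wait loop; the kubectl and kubeconfig paths mirror the log, while the timeout value is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // default service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not created within %v", timeout)
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.3/kubectl",
    		"/var/lib/minikube/kubeconfig", 2*time.Minute)
    	fmt.Println(err)
    }
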
	I1217 12:15:07.547948 3273554 kubeadm.go:403] duration metric: took 21.279729494s to StartCluster
	I1217 12:15:07.547966 3273554 settings.go:142] acquiring lock: {Name:mkbe14d68dd5ff3fa1749157c50305d115f5a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:15:07.548031 3273554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 12:15:07.549052 3273554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/kubeconfig: {Name:mk7638fe728a7a45ed6de1a7db2d4b63e30a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:15:07.549280 3273554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 12:15:07.549286 3273554 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1217 12:15:07.549543 3273554 config.go:182] Loaded profile config "flannel-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 12:15:07.549600 3273554 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:15:07.549676 3273554 addons.go:70] Setting storage-provisioner=true in profile "flannel-348887"
	I1217 12:15:07.549693 3273554 addons.go:239] Setting addon storage-provisioner=true in "flannel-348887"
	I1217 12:15:07.549718 3273554 host.go:66] Checking if "flannel-348887" exists ...
	I1217 12:15:07.550210 3273554 cli_runner.go:164] Run: docker container inspect flannel-348887 --format={{.State.Status}}
	I1217 12:15:07.550637 3273554 addons.go:70] Setting default-storageclass=true in profile "flannel-348887"
	I1217 12:15:07.550658 3273554 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-348887"
	I1217 12:15:07.550936 3273554 cli_runner.go:164] Run: docker container inspect flannel-348887 --format={{.State.Status}}
	I1217 12:15:07.553485 3273554 out.go:179] * Verifying Kubernetes components...
	I1217 12:15:07.556592 3273554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:15:07.593458 3273554 addons.go:239] Setting addon default-storageclass=true in "flannel-348887"
	I1217 12:15:07.593495 3273554 host.go:66] Checking if "flannel-348887" exists ...
	I1217 12:15:07.593947 3273554 cli_runner.go:164] Run: docker container inspect flannel-348887 --format={{.State.Status}}
	I1217 12:15:07.596091 3273554 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:15:07.599371 3273554 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:15:07.599394 3273554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 12:15:07.599457 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:15:07.634278 3273554 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:15:07.634300 3273554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:15:07.634359 3273554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-348887
	I1217 12:15:07.647367 3273554 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/flannel-348887/id_ed25519 Username:docker}
	I1217 12:15:07.673022 3273554 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:36083 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/flannel-348887/id_ed25519 Username:docker}
	I1217 12:15:07.977048 3273554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:15:07.977268 3273554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 12:15:07.996513 3273554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:15:08.034942 3273554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:15:08.058923 3273554 node_ready.go:35] waiting up to 15m0s for node "flannel-348887" to be "Ready" ...
	I1217 12:15:08.659764 3273554 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
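
The sed pipeline a few lines up splices a hosts{} stanza into the CoreDNS Corefile just before its "forward . /etc/resolv.conf" line, mapping host.minikube.internal to the gateway IP. The same edit expressed in Go, as a sketch (the second sed expression, which enables the log plugin, is omitted):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, ip string) string {
    	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			b.WriteString(block) // insert the hosts block before the forward plugin
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	fmt.Print(injectHostRecord(".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n", "192.168.76.1"))
    }
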
	I1217 12:15:08.991040 3273554 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1217 12:15:08.994979 3273554 addons.go:530] duration metric: took 1.445343701s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1217 12:15:09.165724 3273554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-348887" context rescaled to 1 replicas
	W1217 12:15:10.065799 3273554 node_ready.go:57] node "flannel-348887" has "Ready":"False" status (will retry)
	I1217 12:15:11.562487 3273554 node_ready.go:49] node "flannel-348887" is "Ready"
	I1217 12:15:11.562578 3273554 node_ready.go:38] duration metric: took 3.503609324s for node "flannel-348887" to be "Ready" ...
	I1217 12:15:11.562606 3273554 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:15:11.562710 3273554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:15:11.576028 3273554 api_server.go:72] duration metric: took 4.026690844s to wait for apiserver process to appear ...
	I1217 12:15:11.576056 3273554 api_server.go:88] waiting for apiserver healthz status ...
	I1217 12:15:11.576086 3273554 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 12:15:11.584889 3273554 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 12:15:11.586026 3273554 api_server.go:141] control plane version: v1.34.3
	I1217 12:15:11.586058 3273554 api_server.go:131] duration metric: took 9.993719ms to wait for apiserver health ...
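
The healthz probe above is a plain HTTPS GET that passes on a 200 response with body "ok". A sketch of the check; skipping TLS verification here is a simplifying assumption (the real client trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func apiserverHealthy(url string) bool {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }

    func main() {
    	fmt.Println(apiserverHealthy("https://192.168.76.2:8443/healthz"))
    }
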
	I1217 12:15:11.586068 3273554 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 12:15:11.589441 3273554 system_pods.go:59] 7 kube-system pods found
	I1217 12:15:11.589484 3273554 system_pods.go:61] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:11.589492 3273554 system_pods.go:61] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:11.589498 3273554 system_pods.go:61] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:11.589505 3273554 system_pods.go:61] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 12:15:11.589517 3273554 system_pods.go:61] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:11.589522 3273554 system_pods.go:61] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:11.589531 3273554 system_pods.go:61] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:15:11.589540 3273554 system_pods.go:74] duration metric: took 3.465158ms to wait for pod list to return data ...
	I1217 12:15:11.589554 3273554 default_sa.go:34] waiting for default service account to be created ...
	I1217 12:15:11.592497 3273554 default_sa.go:45] found service account: "default"
	I1217 12:15:11.592525 3273554 default_sa.go:55] duration metric: took 2.964993ms for default service account to be created ...
	I1217 12:15:11.592536 3273554 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 12:15:11.596057 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:11.596095 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:11.596104 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:11.596110 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:11.596117 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 12:15:11.596125 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:11.596129 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:11.596134 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:15:11.596157 3273554 retry.go:31] will retry after 309.847878ms: missing components: kube-dns
	I1217 12:15:11.910687 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:11.910724 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:11.910731 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:11.910747 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:11.910756 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 12:15:11.910761 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:11.910766 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:11.910776 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:15:11.910802 3273554 retry.go:31] will retry after 237.356906ms: missing components: kube-dns
	I1217 12:15:12.153098 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:12.153143 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:12.153151 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:12.153162 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:12.153173 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 12:15:12.153177 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:12.153188 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:12.153198 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:15:12.153218 3273554 retry.go:31] will retry after 475.193796ms: missing components: kube-dns
	I1217 12:15:12.633391 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:12.633483 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:12.633508 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:12.633544 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:12.633565 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running
	I1217 12:15:12.634014 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:12.634049 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:12.634084 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Running
	I1217 12:15:12.634117 3273554 retry.go:31] will retry after 461.713976ms: missing components: kube-dns
	I1217 12:15:13.099905 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:13.099950 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:13.099960 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:13.099966 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:13.099971 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running
	I1217 12:15:13.099977 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:13.099981 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:13.099985 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Running
	I1217 12:15:13.100002 3273554 retry.go:31] will retry after 660.938323ms: missing components: kube-dns
	I1217 12:15:13.766592 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:13.766627 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:13.766634 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:13.766642 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:13.766646 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running
	I1217 12:15:13.766652 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:13.766657 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:13.766667 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Running
	I1217 12:15:13.766681 3273554 retry.go:31] will retry after 754.881092ms: missing components: kube-dns
	I1217 12:15:14.525949 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:14.525986 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:14.525993 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:14.526000 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:14.526005 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running
	I1217 12:15:14.526009 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:14.526013 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:14.526017 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Running
	I1217 12:15:14.526030 3273554 retry.go:31] will retry after 1.154148327s: missing components: kube-dns
	I1217 12:15:15.683981 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:15.684016 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:15.684023 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:15.684030 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:15.684034 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running
	I1217 12:15:15.684039 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:15.684043 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:15.684046 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Running
	I1217 12:15:15.684060 3273554 retry.go:31] will retry after 1.141204036s: missing components: kube-dns
	I1217 12:15:16.828922 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:16.828959 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:16.828966 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:16.828973 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:16.828977 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running
	I1217 12:15:16.828982 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:16.828986 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:16.828989 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Running
	I1217 12:15:16.829004 3273554 retry.go:31] will retry after 1.349178084s: missing components: kube-dns
	I1217 12:15:18.182110 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:18.182149 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:18.182156 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:18.182162 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:18.182167 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running
	I1217 12:15:18.182171 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:18.182176 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:18.182179 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Running
	I1217 12:15:18.182195 3273554 retry.go:31] will retry after 2.23603007s: missing components: kube-dns
	I1217 12:15:20.424106 3273554 system_pods.go:86] 7 kube-system pods found
	I1217 12:15:20.424143 3273554 system_pods.go:89] "coredns-66bc5c9577-9nl2s" [b1418ae7-bcc1-43bd-8069-143f572cc52a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:15:20.424150 3273554 system_pods.go:89] "etcd-flannel-348887" [bd0870f6-d83e-491a-861b-6d90e1fbbf9e] Running
	I1217 12:15:20.424157 3273554 system_pods.go:89] "kube-apiserver-flannel-348887" [9739632a-3b71-4765-aa1a-a14da66d6164] Running
	I1217 12:15:20.424161 3273554 system_pods.go:89] "kube-controller-manager-flannel-348887" [09dd0079-b8d4-40d4-be0b-61780ef3d793] Running
	I1217 12:15:20.424165 3273554 system_pods.go:89] "kube-proxy-k6s5k" [176f3cf7-e0c1-4554-8aae-7ca769b30a91] Running
	I1217 12:15:20.424169 3273554 system_pods.go:89] "kube-scheduler-flannel-348887" [40c445c2-73fb-43fb-8ce3-a16419f0c68d] Running
	I1217 12:15:20.424173 3273554 system_pods.go:89] "storage-provisioner" [95b0d3dc-9b09-44cb-a230-92ecc601b22b] Running
	I1217 12:15:20.424187 3273554 retry.go:31] will retry after 2.137274427s: missing components: kube-dns
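
The dozen "will retry after ..." lines above are one loop: list the kube-system pods, and if kube-dns (CoreDNS) is not yet Running, sleep a growing, jittered interval and try again. A generic sketch of that shape; the check function is a stand-in and the 1.5x growth plus jitter factor are assumptions inferred from the uneven intervals in the log:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(check func() error, initial time.Duration, attempts int) error {
    	delay := initial
    	for i := 0; i < attempts; i++ {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		// Jitter the delay so concurrent pollers don't sync up.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    		delay = delay * 3 / 2 // grow roughly 1.5x per attempt
    	}
    	return fmt.Errorf("gave up after %d attempts", attempts)
    }

    func main() {
    	_ = retryWithBackoff(func() error {
    		return fmt.Errorf("missing components: kube-dns") // stand-in for the pod check
    	}, 300*time.Millisecond, 5)
    }
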
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511207273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511268646Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511382582Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511463852Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511528459Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511597192Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511654275Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511737137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511807372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.511906391Z" level=info msg="Connect containerd service"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.512274624Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.513135250Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526293232Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526625018Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526753222Z" level=info msg="Start subscribing containerd event"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.526875034Z" level=info msg="Start recovering state"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.563780213Z" level=info msg="Start event monitor"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.563957803Z" level=info msg="Start cni network conf syncer for default"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564027291Z" level=info msg="Start streaming server"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564090232Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564145632Z" level=info msg="runtime interface starting up..."
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564203560Z" level=info msg="starting plugins..."
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.564286234Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 11:56:00 no-preload-118262 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 17 11:56:00 no-preload-118262 containerd[555]: time="2025-12-17T11:56:00.567526269Z" level=info msg="containerd successfully booted in 0.088039s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 12:15:23.980871   10261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:15:23.981562   10261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:15:23.983116   10261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:15:23.983634   10261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 12:15:23.985333   10261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 09:31] overlayfs: idmapped layers are currently not supported
	[Dec17 09:41] overlayfs: idmapped layers are currently not supported
	[Dec17 09:43] overlayfs: idmapped layers are currently not supported
	[Dec17 09:44] overlayfs: idmapped layers are currently not supported
	[  +5.066669] overlayfs: idmapped layers are currently not supported
	[ +38.827173] overlayfs: idmapped layers are currently not supported
	[Dec17 09:45] overlayfs: idmapped layers are currently not supported
	[Dec17 09:46] overlayfs: idmapped layers are currently not supported
	[Dec17 09:48] overlayfs: idmapped layers are currently not supported
	[  +5.468161] overlayfs: idmapped layers are currently not supported
	[Dec17 09:49] overlayfs: idmapped layers are currently not supported
	[  +4.263444] overlayfs: idmapped layers are currently not supported
	[Dec17 09:50] overlayfs: idmapped layers are currently not supported
	[Dec17 10:07] overlayfs: idmapped layers are currently not supported
	[Dec17 10:08] overlayfs: idmapped layers are currently not supported
	[Dec17 10:10] overlayfs: idmapped layers are currently not supported
	[Dec17 10:11] overlayfs: idmapped layers are currently not supported
	[Dec17 10:13] overlayfs: idmapped layers are currently not supported
	[Dec17 10:15] overlayfs: idmapped layers are currently not supported
	[Dec17 10:16] overlayfs: idmapped layers are currently not supported
	[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 12:15:24 up 17:57,  0 user,  load average: 2.24, 1.87, 1.54
	Linux no-preload-118262 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 12:15:20 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:15:21 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1545.
	Dec 17 12:15:21 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:15:21 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:15:21 no-preload-118262 kubelet[10123]: E1217 12:15:21.313440   10123 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:15:21 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:15:21 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:15:21 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1546.
	Dec 17 12:15:21 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:15:22 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:15:22 no-preload-118262 kubelet[10128]: E1217 12:15:22.064196   10128 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:15:22 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:15:22 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:15:22 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1547.
	Dec 17 12:15:22 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:15:22 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:15:22 no-preload-118262 kubelet[10133]: E1217 12:15:22.824863   10133 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:15:22 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:15:22 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 12:15:23 no-preload-118262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1548.
	Dec 17 12:15:23 no-preload-118262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:15:23 no-preload-118262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 12:15:23 no-preload-118262 kubelet[10168]: E1217 12:15:23.578388   10168 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 12:15:23 no-preload-118262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 12:15:23 no-preload-118262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-118262 -n no-preload-118262: exit status 2 (334.005277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-118262" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (257.85s)
E1217 12:16:23.776842 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:16:44.258486 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/calico-348887/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
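
The kubelet section of the log dump above shows the likely root cause of this run's no-preload failures: kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host, systemd restart-loops it (restart counter 1545 through 1548 within a few seconds), and the apiserver on localhost:8443 therefore never comes up, which matches the "connection refused" errors in the describe-nodes section. A minimal way to confirm a host's cgroup mode from a shell, using standard commands that are not part of this test suite (the boot parameter in the second command is a commonly documented remediation, shown as an assumption rather than something this report verified):

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on a legacy cgroup v1 host
	stat -fc %T /sys/fs/cgroup/
	# check whether the kernel was booted with systemd's unified (v2) hierarchy forced on
	grep -o 'systemd.unified_cgroup_hierarchy=[01]' /proc/cmdline || echo 'flag not set'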

                                                
                                    

Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.95
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 4.51
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.1
18 TestDownloadOnly/v1.34.3/DeleteAll 0.21
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-rc.1/json-events 4.28
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 126.94
38 TestAddons/serial/Volcano 40.73
40 TestAddons/serial/GCPAuth/Namespaces 0.22
41 TestAddons/serial/GCPAuth/FakeCredentials 8.89
44 TestAddons/parallel/Registry 16.44
45 TestAddons/parallel/RegistryCreds 0.77
46 TestAddons/parallel/Ingress 18.43
47 TestAddons/parallel/InspektorGadget 11
48 TestAddons/parallel/MetricsServer 6.81
50 TestAddons/parallel/CSI 40.05
51 TestAddons/parallel/Headlamp 17.77
52 TestAddons/parallel/CloudSpanner 6.18
53 TestAddons/parallel/LocalPath 51.91
54 TestAddons/parallel/NvidiaDevicePlugin 6.56
55 TestAddons/parallel/Yakd 10.82
57 TestAddons/StoppedEnableDisable 12.36
58 TestCertOptions 39.68
59 TestCertExpiration 231.19
61 TestForceSystemdFlag 35.54
62 TestForceSystemdEnv 39.19
63 TestDockerEnvContainerd 46.3
67 TestErrorSpam/setup 31.26
68 TestErrorSpam/start 0.82
69 TestErrorSpam/status 1.1
70 TestErrorSpam/pause 1.81
71 TestErrorSpam/unpause 1.87
72 TestErrorSpam/stop 1.64
75 TestFunctional/serial/CopySyncFile 0.01
76 TestFunctional/serial/StartWithProxy 48.6
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.18
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.12
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.52
84 TestFunctional/serial/CacheCmd/cache/add_local 1.25
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 41.52
93 TestFunctional/serial/ComponentHealth 0.12
94 TestFunctional/serial/LogsCmd 1.52
95 TestFunctional/serial/LogsFileCmd 1.51
96 TestFunctional/serial/InvalidService 4.05
98 TestFunctional/parallel/ConfigCmd 0.5
99 TestFunctional/parallel/DashboardCmd 8.84
100 TestFunctional/parallel/DryRun 0.46
101 TestFunctional/parallel/InternationalLanguage 0.23
102 TestFunctional/parallel/StatusCmd 1.06
106 TestFunctional/parallel/ServiceCmdConnect 8.73
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 19.91
110 TestFunctional/parallel/SSHCmd 0.74
111 TestFunctional/parallel/CpCmd 2.51
113 TestFunctional/parallel/FileSync 0.36
114 TestFunctional/parallel/CertSync 2.24
118 TestFunctional/parallel/NodeLabels 0.12
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.93
122 TestFunctional/parallel/License 0.37
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.51
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
136 TestFunctional/parallel/ProfileCmd/profile_list 0.43
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
138 TestFunctional/parallel/MountCmd/any-port 8.43
139 TestFunctional/parallel/ServiceCmd/List 0.54
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
142 TestFunctional/parallel/ServiceCmd/Format 0.46
143 TestFunctional/parallel/ServiceCmd/URL 0.39
144 TestFunctional/parallel/MountCmd/specific-port 2.31
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.59
146 TestFunctional/parallel/Version/short 0.07
147 TestFunctional/parallel/Version/components 1.41
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
153 TestFunctional/parallel/ImageCommands/Setup 0.65
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.26
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
158 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
164 TestFunctional/delete_echo-server_images 0.05
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.3
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.04
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.89
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.99
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 0.96
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.49
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.23
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.66
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 2.26
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.33
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.67
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.59
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.21
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.4
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.42
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.39
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 2.06
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.26
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.51
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.22
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.22
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.22
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.51
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.25
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.12
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 1.09
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.33
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.45
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.98
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.39
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.14
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 146.07
265 TestMultiControlPlane/serial/DeployApp 7.66
266 TestMultiControlPlane/serial/PingHostFromPods 1.7
267 TestMultiControlPlane/serial/AddWorkerNode 31.08
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
270 TestMultiControlPlane/serial/CopyFile 20.31
271 TestMultiControlPlane/serial/StopSecondaryNode 12.96
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
273 TestMultiControlPlane/serial/RestartSecondaryNode 13.67
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.37
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.17
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
278 TestMultiControlPlane/serial/StopCluster 36.45
279 TestMultiControlPlane/serial/RestartCluster 59.95
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
281 TestMultiControlPlane/serial/AddSecondaryNode 95.17
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
287 TestJSONOutput/start/Command 49.2
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.74
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.64
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 6.02
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 37.19
313 TestKicCustomNetwork/use_default_bridge_network 37.83
314 TestKicExistingNetwork 32.94
315 TestKicCustomSubnet 36.45
316 TestKicStaticIP 36.27
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 70.35
321 TestMountStart/serial/StartWithMountFirst 8.36
322 TestMountStart/serial/VerifyMountFirst 0.27
323 TestMountStart/serial/StartWithMountSecond 7.96
324 TestMountStart/serial/VerifyMountSecond 0.26
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 7.44
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 80.52
333 TestMultiNode/serial/DeployApp2Nodes 5.51
334 TestMultiNode/serial/PingHostFrom2Pods 1
335 TestMultiNode/serial/AddNode 29.53
336 TestMultiNode/serial/MultiNodeLabels 0.08
337 TestMultiNode/serial/ProfileList 0.7
338 TestMultiNode/serial/CopyFile 10.34
339 TestMultiNode/serial/StopNode 2.46
340 TestMultiNode/serial/StartAfterStop 7.71
341 TestMultiNode/serial/RestartKeepsNodes 72.4
342 TestMultiNode/serial/DeleteNode 5.69
343 TestMultiNode/serial/StopMultiNode 24.22
344 TestMultiNode/serial/RestartMultiNode 49.9
345 TestMultiNode/serial/ValidateNameConflict 37.94
350 TestPreload 124.44
352 TestScheduledStopUnix 106.45
355 TestInsufficientStorage 12.28
356 TestRunningBinaryUpgrade 310.75
359 TestMissingContainerUpgrade 139.92
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 44.18
363 TestNoKubernetes/serial/StartWithStopK8s 24.43
364 TestNoKubernetes/serial/Start 6.92
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
367 TestNoKubernetes/serial/ProfileList 0.7
368 TestNoKubernetes/serial/Stop 1.3
369 TestNoKubernetes/serial/StartNoArgs 6.54
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
371 TestStoppedBinaryUpgrade/Setup 0.92
372 TestStoppedBinaryUpgrade/Upgrade 299.73
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.29
382 TestPause/serial/Start 49.58
383 TestPause/serial/SecondStartNoReconfiguration 6.24
384 TestPause/serial/Pause 0.77
385 TestPause/serial/VerifyStatus 0.33
386 TestPause/serial/Unpause 0.62
387 TestPause/serial/PauseAgain 0.85
388 TestPause/serial/DeletePaused 2.81
389 TestPause/serial/VerifyDeletedResources 0.5
397 TestNetworkPlugins/group/false 3.61
402 TestStartStop/group/old-k8s-version/serial/FirstStart 59.28
403 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
404 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.46
405 TestStartStop/group/old-k8s-version/serial/Stop 12.16
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
407 TestStartStop/group/old-k8s-version/serial/SecondStart 51.44
408 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
409 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
410 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
411 TestStartStop/group/old-k8s-version/serial/Pause 3.24
415 TestStartStop/group/embed-certs/serial/FirstStart 57.16
416 TestStartStop/group/embed-certs/serial/DeployApp 9.35
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
418 TestStartStop/group/embed-certs/serial/Stop 12.09
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
420 TestStartStop/group/embed-certs/serial/SecondStart 50.53
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.02
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
424 TestStartStop/group/embed-certs/serial/Pause 3.06
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.71
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.11
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.87
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.1
440 TestStartStop/group/no-preload/serial/Stop 1.31
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.34
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
453 TestNetworkPlugins/group/auto/Start 52.09
454 TestNetworkPlugins/group/auto/KubeletFlags 0.3
455 TestNetworkPlugins/group/auto/NetCatPod 9.27
456 TestNetworkPlugins/group/auto/DNS 0.19
457 TestNetworkPlugins/group/auto/Localhost 0.14
458 TestNetworkPlugins/group/auto/HairPin 0.15
459 TestNetworkPlugins/group/kindnet/Start 55.46
460 TestNetworkPlugins/group/kindnet/ControllerPod 6
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
462 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
463 TestNetworkPlugins/group/kindnet/DNS 0.29
464 TestNetworkPlugins/group/kindnet/Localhost 0.16
465 TestNetworkPlugins/group/kindnet/HairPin 0.17
466 TestNetworkPlugins/group/calico/Start 56.65
467 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/calico/KubeletFlags 0.29
470 TestNetworkPlugins/group/calico/NetCatPod 9.25
471 TestNetworkPlugins/group/calico/DNS 0.18
472 TestNetworkPlugins/group/calico/Localhost 0.15
473 TestNetworkPlugins/group/calico/HairPin 0.19
474 TestNetworkPlugins/group/custom-flannel/Start 64.16
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
477 TestNetworkPlugins/group/custom-flannel/DNS 0.21
478 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
479 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
480 TestNetworkPlugins/group/enable-default-cni/Start 44.07
481 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
482 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
483 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
484 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
485 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
486 TestNetworkPlugins/group/flannel/Start 58.97
487 TestNetworkPlugins/group/bridge/Start 82.49
488 TestNetworkPlugins/group/flannel/ControllerPod 6
489 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
490 TestNetworkPlugins/group/flannel/NetCatPod 11.35
491 TestNetworkPlugins/group/flannel/DNS 0.27
492 TestNetworkPlugins/group/flannel/Localhost 0.2
493 TestNetworkPlugins/group/flannel/HairPin 0.17
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
495 TestNetworkPlugins/group/bridge/NetCatPod 8.3
496 TestNetworkPlugins/group/bridge/DNS 0.18
497 TestNetworkPlugins/group/bridge/Localhost 0.15
498 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.28.0/json-events (5.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-645024 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-645024 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.94984556s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.95s)
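
The json-events subtest above drives minikube with -o=json, which makes it emit one JSON event per line on stdout instead of styled console output. As a hedged illustration only: assuming minikube's documented CloudEvents-style schema (event type "io.k8s.sigs.minikube.step" carrying data.currentstep and data.name) and jq being installed, the step progression of the same invocation could be eyeballed like this (flags abridged from the run above):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-645024 --force --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker \
		| jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep): \(.data.name)"'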

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 10:22:09.393199 2924574 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1217 10:22:09.393276 2924574 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
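
The preload-exists check amounts to confirming that the tarball cached by the previous subtest is present on disk; roughly the same check by hand, assuming the MINIKUBE_HOME layout shown in the log above:

	MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"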

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-645024
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-645024: exit status 85 (94.18601ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-645024 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-645024 │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:22:03
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:22:03.487760 2924579 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:22:03.488009 2924579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:22:03.488037 2924579 out.go:374] Setting ErrFile to fd 2...
	I1217 10:22:03.488057 2924579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:22:03.488340 2924579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	W1217 10:22:03.488547 2924579 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22182-2922712/.minikube/config/config.json: open /home/jenkins/minikube-integration/22182-2922712/.minikube/config/config.json: no such file or directory
	I1217 10:22:03.489029 2924579 out.go:368] Setting JSON to true
	I1217 10:22:03.489926 2924579 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":57874,"bootTime":1765909050,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:22:03.490018 2924579 start.go:143] virtualization:  
	I1217 10:22:03.496101 2924579 out.go:99] [download-only-645024] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1217 10:22:03.496324 2924579 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 10:22:03.496529 2924579 notify.go:221] Checking for updates...
	I1217 10:22:03.500743 2924579 out.go:171] MINIKUBE_LOCATION=22182
	I1217 10:22:03.504296 2924579 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:22:03.507831 2924579 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:22:03.511178 2924579 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:22:03.514441 2924579 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 10:22:03.520734 2924579 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 10:22:03.521008 2924579 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:22:03.551647 2924579 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:22:03.551765 2924579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:22:03.612827 2924579 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-17 10:22:03.603399317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:22:03.612944 2924579 docker.go:319] overlay module found
	I1217 10:22:03.616150 2924579 out.go:99] Using the docker driver based on user configuration
	I1217 10:22:03.616186 2924579 start.go:309] selected driver: docker
	I1217 10:22:03.616194 2924579 start.go:927] validating driver "docker" against <nil>
	I1217 10:22:03.616307 2924579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:22:03.670060 2924579 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-17 10:22:03.661036148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:22:03.670221 2924579 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 10:22:03.670513 2924579 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 10:22:03.670679 2924579 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 10:22:03.673986 2924579 out.go:171] Using Docker driver with root privileges
	I1217 10:22:03.677116 2924579 cni.go:84] Creating CNI manager for ""
	I1217 10:22:03.677198 2924579 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1217 10:22:03.677211 2924579 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 10:22:03.677298 2924579 start.go:353] cluster config:
	{Name:download-only-645024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-645024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:22:03.680571 2924579 out.go:99] Starting "download-only-645024" primary control-plane node in "download-only-645024" cluster
	I1217 10:22:03.680605 2924579 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1217 10:22:03.683672 2924579 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 10:22:03.683746 2924579 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1217 10:22:03.683835 2924579 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 10:22:03.699558 2924579 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 10:22:03.699743 2924579 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 10:22:03.699849 2924579 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 10:22:03.734539 2924579 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1217 10:22:03.734567 2924579 cache.go:65] Caching tarball of preloaded images
	I1217 10:22:03.734759 2924579 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1217 10:22:03.738373 2924579 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 10:22:03.738420 2924579 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1217 10:22:03.821922 2924579 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1217 10:22:03.822053 2924579 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1217 10:22:07.907035 2924579 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1217 10:22:07.907514 2924579 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/download-only-645024/config.json ...
	I1217 10:22:07.907574 2924579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/download-only-645024/config.json: {Name:mk6acac79cb3caa030070ee3a35cbfaf8da38742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 10:22:07.907816 2924579 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1217 10:22:07.908076 2924579 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-645024 host does not exist
	  To start a cluster, run: "minikube start -p download-only-645024"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
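
Note that LogsDuration passes even though the command exits non-zero: with a download-only profile there is no host to collect logs from, so "minikube logs" prints the audit table and last-start log and then exits 85, which the test tolerates. Reproducing the observation by hand (same binary and profile as above; the trailing echo just surfaces the exit code):

	out/minikube-linux-arm64 logs -p download-only-645024; echo "exit status: $?"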

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-645024
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (4.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-600362 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-600362 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.511110021s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (4.51s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1217 10:22:14.360056 2924574 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
I1217 10:22:14.360090 2924574 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-600362
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-600362: exit status 85 (95.06763ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-645024 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-645024 │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │ 17 Dec 25 10:22 UTC │
	│ delete  │ -p download-only-645024                                                                                                                                                               │ download-only-645024 │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │ 17 Dec 25 10:22 UTC │
	│ start   │ -o=json --download-only -p download-only-600362 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-600362 │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:22:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:22:09.890611 2924777 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:22:09.890721 2924777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:22:09.890731 2924777 out.go:374] Setting ErrFile to fd 2...
	I1217 10:22:09.890736 2924777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:22:09.890992 2924777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:22:09.891394 2924777 out.go:368] Setting JSON to true
	I1217 10:22:09.892208 2924777 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":57880,"bootTime":1765909050,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:22:09.892276 2924777 start.go:143] virtualization:  
	I1217 10:22:09.895713 2924777 out.go:99] [download-only-600362] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:22:09.895991 2924777 notify.go:221] Checking for updates...
	I1217 10:22:09.899030 2924777 out.go:171] MINIKUBE_LOCATION=22182
	I1217 10:22:09.902315 2924777 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:22:09.905317 2924777 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:22:09.908263 2924777 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:22:09.911203 2924777 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 10:22:09.917048 2924777 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 10:22:09.917413 2924777 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:22:09.948449 2924777 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:22:09.948590 2924777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:22:10.020476 2924777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-12-17 10:22:10.01020216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:22:10.020636 2924777 docker.go:319] overlay module found
	I1217 10:22:10.023696 2924777 out.go:99] Using the docker driver based on user configuration
	I1217 10:22:10.023756 2924777 start.go:309] selected driver: docker
	I1217 10:22:10.023765 2924777 start.go:927] validating driver "docker" against <nil>
	I1217 10:22:10.023889 2924777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:22:10.080403 2924777 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-12-17 10:22:10.070765791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:22:10.080579 2924777 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 10:22:10.080893 2924777 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 10:22:10.081062 2924777 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 10:22:10.084309 2924777 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-600362 host does not exist
	  To start a cluster, run: "minikube start -p download-only-600362"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.10s)
TestDownloadOnly/v1.34.3/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.21s)
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-600362
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)
TestDownloadOnly/v1.35.0-rc.1/json-events (4.28s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-470267 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-470267 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.284023372s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (4.28s)
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1217 10:22:19.093820 2924574 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1217 10:22:19.093861 2924574 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-470267
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-470267: exit status 85 (89.048914ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                            ARGS                                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-645024 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-645024 │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │ 17 Dec 25 10:22 UTC │
	│ delete  │ -p download-only-645024                                                                                                                                                                    │ download-only-645024 │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │ 17 Dec 25 10:22 UTC │
	│ start   │ -o=json --download-only -p download-only-600362 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-600362 │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │ 17 Dec 25 10:22 UTC │
	│ delete  │ -p download-only-600362                                                                                                                                                                    │ download-only-600362 │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │ 17 Dec 25 10:22 UTC │
	│ start   │ -o=json --download-only -p download-only-470267 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-470267 │ jenkins │ v1.37.0 │ 17 Dec 25 10:22 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 10:22:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 10:22:14.862576 2924972 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:22:14.862948 2924972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:22:14.862983 2924972 out.go:374] Setting ErrFile to fd 2...
	I1217 10:22:14.863002 2924972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:22:14.863299 2924972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:22:14.863781 2924972 out.go:368] Setting JSON to true
	I1217 10:22:14.864758 2924972 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":57885,"bootTime":1765909050,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:22:14.864869 2924972 start.go:143] virtualization:  
	I1217 10:22:14.868517 2924972 out.go:99] [download-only-470267] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:22:14.868775 2924972 notify.go:221] Checking for updates...
	I1217 10:22:14.871791 2924972 out.go:171] MINIKUBE_LOCATION=22182
	I1217 10:22:14.874979 2924972 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:22:14.878001 2924972 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:22:14.880993 2924972 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:22:14.884096 2924972 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 10:22:14.889849 2924972 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 10:22:14.890126 2924972 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:22:14.924956 2924972 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:22:14.925139 2924972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:22:14.980849 2924972 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 10:22:14.971262502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:22:14.980959 2924972 docker.go:319] overlay module found
	I1217 10:22:14.984030 2924972 out.go:99] Using the docker driver based on user configuration
	I1217 10:22:14.984076 2924972 start.go:309] selected driver: docker
	I1217 10:22:14.984084 2924972 start.go:927] validating driver "docker" against <nil>
	I1217 10:22:14.984197 2924972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:22:15.050218 2924972 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 10:22:15.038394457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:22:15.050393 2924972 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 10:22:15.050716 2924972 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 10:22:15.050879 2924972 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 10:22:15.054081 2924972 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-470267 host does not exist
	  To start a cluster, run: "minikube start -p download-only-470267"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.09s)
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.22s)
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-470267
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.13s)
TestBinaryMirror (0.61s)
=== RUN   TestBinaryMirror
I1217 10:22:20.420810 2924574 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-362962 --alsologtostderr --binary-mirror http://127.0.0.1:37809 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-362962" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-362962
--- PASS: TestBinaryMirror (0.61s)
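TestBinaryMirror only needs an HTTP endpoint laid out like dl.k8s.io; the harness started one on 127.0.0.1:37809 for this run. A rough local sketch, assuming python3 is available and the served directory mirrors the /release/<version>/bin/... layout (port and profile name below are illustrative, not the harness's choices):

    # stand-in mirror serving the current directory
    python3 -m http.server 37809 &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-test \
        --binary-mirror http://127.0.0.1:37809 --driver=docker --container-runtime=containerd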
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-413632
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-413632: exit status 85 (69.079184ms)

-- stdout --
	* Profile "addons-413632" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-413632"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-413632
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-413632: exit status 85 (74.168239ms)

-- stdout --
	* Profile "addons-413632" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-413632"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
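Both PreSetup checks pin down the same contract: addon commands against a profile that does not exist must fail fast with exit status 85 and a pointer to "minikube profile list", rather than create anything. Observable directly (the profile name here is deliberately bogus):

    out/minikube-linux-arm64 addons enable dashboard -p no-such-profile; echo "exit=$?"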
TestAddons/Setup (126.94s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-413632 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-413632 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.939762029s)
--- PASS: TestAddons/Setup (126.94s)
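The setup start enables sixteen addons in one invocation through repeated --addons flags. After setup, individual addons can also be toggled per profile; a small sketch against the same profile (metrics-server chosen as an example addon):

    out/minikube-linux-arm64 -p addons-413632 addons list
    out/minikube-linux-arm64 -p addons-413632 addons enable metrics-server
    out/minikube-linux-arm64 -p addons-413632 addons disable metrics-server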
TestAddons/serial/Volcano (40.73s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 57.429876ms
addons_test.go:870: volcano-scheduler stabilized in 57.775518ms
addons_test.go:878: volcano-admission stabilized in 58.039913ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-4fb8p" [7f0546f4-7f04-4c86-b163-3b9ecce83d3b] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004738931s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-4sdq6" [7d18b48a-9cf6-4919-ab3e-13fb278d3063] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00342278s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-wlllp" [62ca8d53-a9ba-4b81-ad62-3e136940242e] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003518306s
addons_test.go:905: (dbg) Run:  kubectl --context addons-413632 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-413632 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-413632 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [80738277-f54a-44eb-b068-0209b4ede1f4] Pending
helpers_test.go:353: "test-job-nginx-0" [80738277-f54a-44eb-b068-0209b4ede1f4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [80738277-f54a-44eb-b068-0209b4ede1f4] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004210156s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable volcano --alsologtostderr -v=1: (12.083617727s)
--- PASS: TestAddons/serial/Volcano (40.73s)
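The Volcano check waits for the scheduler, admission, and controller pods, then submits a VolcanoJob from testdata and waits for its pod. The same flow by hand, with kubectl wait standing in for the harness's polling (timeout value is illustrative):

    kubectl --context addons-413632 create -f testdata/vcjob.yaml
    kubectl --context addons-413632 get vcjob -n my-volcano
    kubectl --context addons-413632 -n my-volcano wait pod \
        -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s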
TestAddons/serial/GCPAuth/Namespaces (0.22s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-413632 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-413632 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)
TestAddons/serial/GCPAuth/FakeCredentials (8.89s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-413632 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-413632 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3f748a85-ff69-427c-b2f5-97a0c724df05] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3f748a85-ff69-427c-b2f5-97a0c724df05] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004173734s
addons_test.go:696: (dbg) Run:  kubectl --context addons-413632 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-413632 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-413632 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-413632 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.89s)
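The gcp-auth webhook mutates new pods to mount fake credentials and export the matching environment variables, so the assertions above reduce to three execs against the busybox pod:

    kubectl --context addons-413632 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-413632 exec busybox -- cat /google-app-creds.json
    kubectl --context addons-413632 exec busybox -- printenv GOOGLE_CLOUD_PROJECT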
TestAddons/parallel/Registry (16.44s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.330343ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-7rcqr" [b848619c-8368-4412-8960-6feeba7ecfaf] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003365692s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-dzmr7" [bfb43fa8-501f-42b5-98d5-5d485303fcf8] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.06451012s
addons_test.go:394: (dbg) Run:  kubectl --context addons-413632 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-413632 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-413632 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.337574439s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 ip
2025/12/17 10:25:42 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.44s)
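The registry addon is probed from inside the cluster (busybox spidering the service DNS name) and from outside (an HTTP GET against the node IP on port 5000, visible in the DEBUG line above). A condensed sketch of both probes, with curl standing in for the harness's Go HTTP client:

    kubectl --context addons-413632 run --rm registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sI "http://$(out/minikube-linux-arm64 -p addons-413632 ip):5000"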
TestAddons/parallel/RegistryCreds (0.77s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.42013ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-413632
addons_test.go:334: (dbg) Run:  kubectl --context addons-413632 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.77s)
TestAddons/parallel/Ingress (18.43s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-413632 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-413632 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-413632 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [016ee9fe-aedd-4b11-9183-9f7ffcb98ed0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [016ee9fe-aedd-4b11-9183-9f7ffcb98ed0] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003574668s
I1217 10:26:51.050900 2924574 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-413632 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable ingress-dns --alsologtostderr -v=1: (1.951928714s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable ingress --alsologtostderr -v=1: (7.783391567s)
--- PASS: TestAddons/parallel/Ingress (18.43s)
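Both halves of the ingress check are plain lookups: the nginx ingress is hit through the node's port 80 with a Host header, and ingress-dns is verified by resolving a test hostname against the node IP as the DNS server. By hand:

    out/minikube-linux-arm64 -p addons-413632 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-413632 ip)"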
TestAddons/parallel/InspektorGadget (11s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-nf8pm" [009c9919-9087-4b32-8e39-5fb205be660e] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006553817s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable inspektor-gadget --alsologtostderr -v=1: (5.992501132s)
--- PASS: TestAddons/parallel/InspektorGadget (11.00s)
TestAddons/parallel/MetricsServer (6.81s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.257788ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-plh59" [82f4705a-8550-4796-96ec-0b0d4df1da3c] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003926546s
addons_test.go:465: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.81s)
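Once the metrics-server pod is healthy, the functional assertion is only that the metrics API serves data. The manual equivalent (the nodes query is an extra sanity check, not part of the test):

    kubectl --context addons-413632 top pods -n kube-system
    kubectl --context addons-413632 top nodes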
TestAddons/parallel/CSI (40.05s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1217 10:26:07.421146 2924574 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 10:26:07.425581 2924574 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 10:26:07.425608 2924574 kapi.go:107] duration metric: took 8.727831ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 8.738982ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [569ce9f9-f936-4950-aee0-018ca7264e14] Pending
helpers_test.go:353: "task-pv-pod" [569ce9f9-f936-4950-aee0-018ca7264e14] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [569ce9f9-f936-4950-aee0-018ca7264e14] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.007905418s
addons_test.go:574: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-413632 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-413632 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-413632 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-413632 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [7a91d6fc-24e3-4c52-8680-da6001920f86] Pending
helpers_test.go:353: "task-pv-pod-restore" [7a91d6fc-24e3-4c52-8680-da6001920f86] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [7a91d6fc-24e3-4c52-8680-da6001920f86] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003712185s
addons_test.go:616: (dbg) Run:  kubectl --context addons-413632 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-413632 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-413632 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable volumesnapshots --alsologtostderr -v=1: (1.00137399s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.907781015s)
--- PASS: TestAddons/parallel/CSI (40.05s)
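The CSI run is a full provision/snapshot/restore round-trip on the csi-hostpath driver. Stripped of the phase polling, the sequence reduces to the test's own manifests:

    kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-413632 delete pod task-pv-pod
    kubectl --context addons-413632 delete pvc hpvc
    kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml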
TestAddons/parallel/Headlamp (17.77s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-413632 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-tgg79" [9b73c44e-2ad5-40c1-9880-397434802867] Pending
helpers_test.go:353: "headlamp-dfcdc64b-tgg79" [9b73c44e-2ad5-40c1-9880-397434802867] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-tgg79" [9b73c44e-2ad5-40c1-9880-397434802867] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004220036s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable headlamp --alsologtostderr -v=1: (5.798069607s)
--- PASS: TestAddons/parallel/Headlamp (17.77s)
TestAddons/parallel/CloudSpanner (6.18s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-srsl9" [709dba4a-d6a0-4857-b113-b26660470d69] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004403902s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable cloud-spanner --alsologtostderr -v=1: (1.174484042s)
--- PASS: TestAddons/parallel/CloudSpanner (6.18s)
TestAddons/parallel/LocalPath (51.91s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-413632 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-413632 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [23d938e3-a8ef-419a-9afb-92924f1a9c67] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [23d938e3-a8ef-419a-9afb-92924f1a9c67] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [23d938e3-a8ef-419a-9afb-92924f1a9c67] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003843675s
addons_test.go:969: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 ssh "cat /opt/local-path-provisioner/pvc-28973b24-7468-45ec-b879-42daf997ba7b_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-413632 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-413632 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.617529366s)
--- PASS: TestAddons/parallel/LocalPath (51.91s)
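local-path provisions hostPath volumes under /opt/local-path-provisioner on the node, which is why the test can read file1 back over SSH. The directory name embeds the PVC UID and changes every run, so a manual check should list rather than hard-code it:

    kubectl --context addons-413632 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-413632 apply -f testdata/storage-provisioner-rancher/pod.yaml
    out/minikube-linux-arm64 -p addons-413632 ssh "ls /opt/local-path-provisioner/"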
TestAddons/parallel/NvidiaDevicePlugin (6.56s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-jrxk9" [17ed177c-d6b4-4987-b6d8-1edaf04187c9] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003137547s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)
TestAddons/parallel/Yakd (10.82s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-6dbks" [d0a0eb58-3f30-4c1b-b1ff-bb6270724fa8] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004215442s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-413632 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-413632 addons disable yakd --alsologtostderr -v=1: (5.815211725s)
--- PASS: TestAddons/parallel/Yakd (10.82s)
TestAddons/StoppedEnableDisable (12.36s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-413632
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-413632: (12.078092941s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-413632
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-413632
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-413632
--- PASS: TestAddons/StoppedEnableDisable (12.36s)
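The point of this check is that addon toggles must work against a stopped cluster: the change is recorded in the profile's config and applied on the next start instead of erroring out. Equivalently:

    out/minikube-linux-arm64 stop -p addons-413632
    out/minikube-linux-arm64 addons enable dashboard -p addons-413632
    out/minikube-linux-arm64 addons disable dashboard -p addons-413632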
TestCertOptions (39.68s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-117283 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-117283 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.681123681s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-117283 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-117283 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-117283 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-117283" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-117283
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-117283: (2.172657246s)
--- PASS: TestCertOptions (39.68s)
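Every --apiserver-ips/--apiserver-names value has to surface as a SAN in the generated apiserver certificate. Grepping the openssl dump is enough to eyeball that (grep added here for brevity; the test matches against the full text):

    out/minikube-linux-arm64 -p cert-options-117283 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 "Subject Alternative Name"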
TestCertExpiration (231.19s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-182607 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1217 11:42:06.157427 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-182607 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.9795858s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-182607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-182607 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.468577948s)
helpers_test.go:176: Cleaning up "cert-expiration-182607" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-182607
E1217 11:45:43.083131 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-182607: (3.739536851s)
--- PASS: TestCertExpiration (231.19s)
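The expiration check starts a cluster whose certificates live three minutes, waits out that window (which accounts for most of the 231s), and confirms that a restart with a longer --cert-expiration regenerates them. The two starts, minus the wait (profile name illustrative):

    out/minikube-linux-arm64 start -p cert-expiration-test --memory=3072 \
        --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ...wait for the 3m certs to lapse, then renew on restart...
    out/minikube-linux-arm64 start -p cert-expiration-test --memory=3072 \
        --cert-expiration=8760h --driver=docker --container-runtime=containerd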
TestForceSystemdFlag (35.54s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-993722 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-993722 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.140747205s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-993722 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-993722" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-993722
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-993722: (2.10823042s)
--- PASS: TestForceSystemdFlag (35.54s)
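This run, like the TestForceSystemdEnv run below, ends with `cat /etc/containerd/config.toml`; the assertion is essentially that the runc runtime is configured with the systemd cgroup driver. A minimal sketch of that check (profile name is illustrative):

	// systemd_cgroup_check.go: sketch of the config.toml assertion behind the
	// --force-systemd tests.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "force-systemd-demo", "ssh",
			"cat /etc/containerd/config.toml").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("SystemdCgroup enabled:",
			strings.Contains(string(out), "SystemdCgroup = true"))
	}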

TestForceSystemdEnv (39.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-085980 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-085980 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.115763351s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-085980 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-085980" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-085980
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-085980: (2.570210149s)
--- PASS: TestForceSystemdEnv (39.19s)

TestDockerEnvContainerd (46.3s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-134985 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-134985 --driver=docker  --container-runtime=containerd: (30.95498578s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-134985"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-134985": (1.082318372s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-S8mwgJvdDp7R/agent.2943906" SSH_AGENT_PID="2943907" DOCKER_HOST=ssh://docker@127.0.0.1:35718 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-S8mwgJvdDp7R/agent.2943906" SSH_AGENT_PID="2943907" DOCKER_HOST=ssh://docker@127.0.0.1:35718 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-S8mwgJvdDp7R/agent.2943906" SSH_AGENT_PID="2943907" DOCKER_HOST=ssh://docker@127.0.0.1:35718 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.278360898s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-S8mwgJvdDp7R/agent.2943906" SSH_AGENT_PID="2943907" DOCKER_HOST=ssh://docker@127.0.0.1:35718 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-134985" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-134985
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-134985: (2.074271983s)
--- PASS: TestDockerEnvContainerd (46.30s)
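After evaluating `docker-env --ssh-host --ssh-add`, every docker command above is driven purely by environment variables pointing at the node's SSH endpoint. The same can be done directly; a sketch (the port 35718 is the per-run value from this log, obtain yours from `minikube docker-env --ssh-host`):

	// docker_env_sketch.go: talking to the node's Docker endpoint the way the
	// evaluated docker-env does, via environment variables only.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("docker", "version")
		cmd.Env = append(os.Environ(),
			"DOCKER_HOST=ssh://docker@127.0.0.1:35718", // value from this run; yours will differ
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}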

TestErrorSpam/setup (31.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-217684 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-217684 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-217684 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-217684 --driver=docker  --container-runtime=containerd: (31.257023315s)
--- PASS: TestErrorSpam/setup (31.26s)
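The nospam subtests that follow all share one shape: run the same subcommand repeatedly against this profile, capture the output, and fail on lines that look like unexpected warnings or errors. A rough sketch of that idea (the filter here is a simplification of the harness's actual checks, and the profile name is illustrative):

	// nospam_sketch.go: the gist of the TestErrorSpam checks, approximately.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, _ := exec.Command("minikube", "-p", "nospam-demo", "status").CombinedOutput()
		for _, line := range strings.Split(string(out), "\n") {
			// Naive stand-in for the harness's allow-list logic.
			if strings.Contains(line, "error") || strings.Contains(line, "WARNING") {
				fmt.Println("unexpected spam:", line)
			}
		}
	}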

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (1.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

TestErrorSpam/stop (1.64s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 stop: (1.429771049s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217684 --log_dir /tmp/nospam-217684 stop
--- PASS: TestErrorSpam/stop (1.64s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (48.6s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-626013 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1217 10:29:28.209536 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:28.216001 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:28.227474 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:28.248938 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:28.290436 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:28.371924 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:28.533424 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:28.855092 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:29.497124 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:30.778484 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:29:33.340039 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-626013 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.597429164s)
--- PASS: TestFunctional/serial/StartWithProxy (48.60s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.18s)

=== RUN   TestFunctional/serial/SoftStart
I1217 10:29:37.709640 2924574 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-626013 --alsologtostderr -v=8
E1217 10:29:38.462033 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-626013 --alsologtostderr -v=8: (7.174152757s)
functional_test.go:678: soft start took 7.175696327s for "functional-626013" cluster.
I1217 10:29:44.884092 2924574 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (7.18s)
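A soft start is just `minikube start` re-run against an already-running cluster; the point of the check is the duration (~7 s here versus ~49 s for the cold start above), since nothing should be recreated. A sketch of the same timing check (profile name is illustrative):

	// soft_start_sketch.go: re-run start against a live cluster and time it.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		begin := time.Now()
		out, err := exec.Command("minikube", "start", "-p", "functional-demo",
			"--alsologtostderr", "-v=8").CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		fmt.Println("soft start took", time.Since(begin)) // expected: seconds, not minutes
	}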

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-626013 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-626013 cache add registry.k8s.io/pause:3.1: (1.318793985s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-626013 cache add registry.k8s.io/pause:3.3: (1.152572563s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-626013 cache add registry.k8s.io/pause:latest: (1.047647665s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-626013 /tmp/TestFunctionalserialCacheCmdcacheadd_local3350796910/001
E1217 10:29:48.704266 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 cache add minikube-local-cache-test:functional-626013
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 cache delete minikube-local-cache-test:functional-626013
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-626013
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.32428ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
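The reload sequence above is: remove the image inside the node, confirm `crictl inspecti` now fails (the expected non-zero exit quoted above), run `minikube cache reload`, and confirm the image is back. A compact sketch of the same round trip (profile name is illustrative):

	// cache_reload_sketch.go: rmi / inspecti-fails / reload / inspecti-succeeds.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func mk(args ...string) error {
		return exec.Command("minikube", args...).Run()
	}

	func main() {
		p, img := "functional-demo", "registry.k8s.io/pause:latest"
		mk("-p", p, "ssh", "sudo crictl rmi "+img)
		fmt.Println("inspecti after rmi (should fail):",
			mk("-p", p, "ssh", "sudo crictl inspecti "+img))
		mk("-p", p, "cache", "reload")
		fmt.Println("inspecti after reload (should be nil):",
			mk("-p", p, "ssh", "sudo crictl inspecti "+img))
	}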

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 kubectl -- --context functional-626013 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-626013 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (41.52s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-626013 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 10:30:09.185669 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-626013 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.51764288s)
functional_test.go:776: restart took 41.51775093s for "functional-626013" cluster.
I1217 10:30:33.996822 2924574 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (41.52s)
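An `--extra-config=apiserver.enable-admission-plugins=...` flag is threaded into the kube-apiserver static-pod manifest on restart. One way to confirm the flag actually landed is to read the apiserver pod's command line back out; a sketch (context name is illustrative):

	// extra_config_check.go: confirm an --extra-config flag reached the
	// kube-apiserver command line.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-demo",
			"-n", "kube-system", "get", "pods", "-l", "component=kube-apiserver",
			"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("NamespaceAutoProvision present:",
			strings.Contains(string(out), "NamespaceAutoProvision"))
	}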

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-626013 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)
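The health check lists control-plane pods with `-l tier=control-plane` and requires each to be phase Running with a Ready condition, which is what the phase/status pairs above report. The same readout from outside the harness (context name is illustrative):

	// component_health_sketch.go: phase readout for control-plane pods.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-demo",
			"-n", "kube-system", "get", "pods", "-l", "tier=control-plane",
			"-o", `jsonpath={range .items[*]}{.metadata.labels.component}{" phase="}{.status.phase}{"\n"}{end}`).Output()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}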

TestFunctional/serial/LogsCmd (1.52s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-626013 logs: (1.51974889s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 logs --file /tmp/TestFunctionalserialLogsFileCmd1058087384/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-626013 logs --file /tmp/TestFunctionalserialLogsFileCmd1058087384/001/logs.txt: (1.505521347s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

TestFunctional/serial/InvalidService (4.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-626013 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-626013
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-626013: exit status 115 (400.627201ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31238 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-626013 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)
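`minikube service` on a service whose pods never came up exits with status 115 (SVC_UNREACHABLE), as quoted above; the test only cares about that exit code. A sketch of the same check (profile and service names are illustrative):

	// invalid_service_sketch.go: read the exit code `minikube service` returns
	// for an unreachable service.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-demo").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // this run produced 115 (SVC_UNREACHABLE)
		}
	}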

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 config get cpus: exit status 14 (74.2152ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 config get cpus: exit status 14 (75.182168ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (8.84s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-626013 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-626013 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 2959323: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.84s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-626013 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-626013 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (207.830828ms)
-- stdout --
	* [functional-626013] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1217 10:31:11.484811 2958790 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:31:11.484965 2958790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:31:11.484992 2958790 out.go:374] Setting ErrFile to fd 2...
	I1217 10:31:11.485010 2958790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:31:11.485317 2958790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:31:11.485732 2958790 out.go:368] Setting JSON to false
	I1217 10:31:11.486711 2958790 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":58422,"bootTime":1765909050,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:31:11.486777 2958790 start.go:143] virtualization:  
	I1217 10:31:11.490159 2958790 out.go:179] * [functional-626013] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 10:31:11.494103 2958790 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:31:11.494182 2958790 notify.go:221] Checking for updates...
	I1217 10:31:11.500048 2958790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:31:11.502883 2958790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:31:11.505898 2958790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:31:11.508753 2958790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:31:11.511748 2958790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:31:11.515108 2958790 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 10:31:11.515691 2958790 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:31:11.544118 2958790 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:31:11.544267 2958790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:31:11.618979 2958790 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 10:31:11.609116808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:31:11.619086 2958790 docker.go:319] overlay module found
	I1217 10:31:11.622221 2958790 out.go:179] * Using the docker driver based on existing profile
	I1217 10:31:11.625050 2958790 start.go:309] selected driver: docker
	I1217 10:31:11.625075 2958790 start.go:927] validating driver "docker" against &{Name:functional-626013 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-626013 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:31:11.625186 2958790 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:31:11.628800 2958790 out.go:203] 
	W1217 10:31:11.631611 2958790 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 10:31:11.634518 2958790 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-626013 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.46s)
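`--dry-run` runs flag validation without touching the cluster; here the 250MB request trips the 1800MB floor and the process exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of the same exit-code check (profile name is illustrative):

	// dry_run_sketch.go: a --dry-run start with an undersized memory request.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "start", "-p", "functional-demo",
			"--dry-run", "--memory", "250MB", "--driver=docker",
			"--container-runtime=containerd").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // 23 = RSRC_INSUFFICIENT_REQ_MEMORY
		}
	}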

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-626013 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-626013 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (224.790255ms)
-- stdout --
	* [functional-626013] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1217 10:31:11.258156 2958688 out.go:360] Setting OutFile to fd 1 ...
	I1217 10:31:11.258377 2958688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:31:11.258401 2958688 out.go:374] Setting ErrFile to fd 2...
	I1217 10:31:11.258420 2958688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 10:31:11.259455 2958688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 10:31:11.260402 2958688 out.go:368] Setting JSON to false
	I1217 10:31:11.261623 2958688 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":58422,"bootTime":1765909050,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 10:31:11.261721 2958688 start.go:143] virtualization:  
	I1217 10:31:11.265209 2958688 out.go:179] * [functional-626013] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1217 10:31:11.268953 2958688 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 10:31:11.269094 2958688 notify.go:221] Checking for updates...
	I1217 10:31:11.276412 2958688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 10:31:11.279493 2958688 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 10:31:11.282604 2958688 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 10:31:11.285548 2958688 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 10:31:11.288541 2958688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 10:31:11.291998 2958688 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 10:31:11.292719 2958688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 10:31:11.334735 2958688 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 10:31:11.334855 2958688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 10:31:11.411310 2958688 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 10:31:11.401717189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 10:31:11.411410 2958688 docker.go:319] overlay module found
	I1217 10:31:11.414470 2958688 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 10:31:11.417206 2958688 start.go:309] selected driver: docker
	I1217 10:31:11.417229 2958688 start.go:927] validating driver "docker" against &{Name:functional-626013 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-626013 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 10:31:11.417385 2958688 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 10:31:11.420864 2958688 out.go:203] 
	W1217 10:31:11.423729 2958688 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 10:31:11.426581 2958688 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
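The French output above is the point of the test: minikube selects its message catalog from the standard locale environment. A sketch of forcing a localized run that way (the LC_ALL mechanism is an assumption about how the harness selects French; the rest mirrors the command above):

	// i18n_sketch.go: force a localized minikube run via the locale environment.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "functional-demo", "--dry-run",
			"--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // assumed locale selector
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		cmd.Run() // expected to fail with the French RSRC_INSUFFICIENT_REQ_MEMORY message
	}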

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

TestFunctional/parallel/ServiceCmdConnect (8.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-626013 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-626013 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-56xk8" [73479d68-de19-4283-9445-f5eca481fb18] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-56xk8" [73479d68-de19-4283-9445-f5eca481fb18] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.017947085s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30621
functional_test.go:1680: http://192.168.49.2:30621: success! body:
Request served by hello-node-connect-7d85dfc575-56xk8
HTTP/1.1 GET /
Host: 192.168.49.2:30621
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.73s)
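End to end, the flow above is: create a deployment from kicbase/echo-server, expose it as a NodePort, resolve the URL with `minikube service --url`, and GET it. A sketch under the same steps (context and profile names are illustrative; the harness's readiness waits are omitted, so the GET may need retries on a fresh deployment):

	// service_connect_sketch.go: deploy, expose, resolve URL, GET.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		ctx, p := "functional-demo", "functional-demo"
		exec.Command("kubectl", "--context", ctx, "create", "deployment",
			"hello-node-connect", "--image", "kicbase/echo-server").Run()
		exec.Command("kubectl", "--context", ctx, "expose", "deployment",
			"hello-node-connect", "--type=NodePort", "--port=8080").Run()
		url, err := exec.Command("minikube", "-p", p, "service",
			"hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		resp, err := http.Get(strings.TrimSpace(string(url)))
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s", body) // echo-server reports the serving pod and request headers
	}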

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (19.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [fa3b2594-5c2b-401c-992f-efba538b6b36] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004569308s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-626013 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-626013 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-626013 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-626013 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [0e33fd72-2dca-45b6-a5d1-002703a3bc03] Pending
E1217 10:30:50.147110 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "sp-pod" [0e33fd72-2dca-45b6-a5d1-002703a3bc03] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003448207s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-626013 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-626013 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-626013 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f6974393-d8b8-474b-a8d7-98097d509418] Pending
helpers_test.go:353: "sp-pod" [f6974393-d8b8-474b-a8d7-98097d509418] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005124842s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-626013 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.91s)
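
The persistence check above can be replayed by hand against the same profile. A minimal sketch, assuming the testdata/storage-provisioner manifests from the minikube repo and the functional-626013 context:

  # claim a volume, then run a pod that mounts it at /tmp/mount
  kubectl --context functional-626013 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-626013 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-626013 exec sp-pod -- touch /tmp/mount/foo
  # recreate the pod; the file should survive because it lives on the claim, not the pod
  kubectl --context functional-626013 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-626013 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-626013 exec sp-pod -- ls /tmp/mount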

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (2.51s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh -n functional-626013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 cp functional-626013:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4103200657/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh -n functional-626013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh -n functional-626013 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.51s)
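
The cp round-trip above reduces to three operations; a minimal sketch using the same profile (the /tmp destination path is illustrative):

  # host -> node
  out/minikube-linux-arm64 -p functional-626013 cp testdata/cp-test.txt /home/docker/cp-test.txt
  # node -> host
  out/minikube-linux-arm64 -p functional-626013 cp functional-626013:/home/docker/cp-test.txt /tmp/cp-test.txt
  # verify the copy landed inside the node
  out/minikube-linux-arm64 -p functional-626013 ssh "sudo cat /home/docker/cp-test.txt"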

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2924574/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo cat /etc/test/nested/copy/2924574/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.24s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2924574.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo cat /etc/ssl/certs/2924574.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2924574.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo cat /usr/share/ca-certificates/2924574.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/29245742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo cat /etc/ssl/certs/29245742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/29245742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo cat /usr/share/ca-certificates/29245742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)
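
The 51391683.0 and 3ec20f2e.0 names checked above follow the OpenSSL subject-hash convention for CA directories. A sketch of how to confirm the mapping, assuming openssl is available on the host and in the node image (the host path to the .pem is illustrative):

  # the <hash>.0 filename should equal the certificate's subject hash
  openssl x509 -noout -hash -in /path/to/2924574.pem
  # inspect the synced copy inside the node
  out/minikube-linux-arm64 -p functional-626013 ssh "sudo openssl x509 -noout -subject -in /etc/ssl/certs/51391683.0"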

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-626013 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
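
The same label query can be written with jsonpath instead of a go-template; a sketch:

  kubectl --context functional-626013 get nodes -o jsonpath='{.items[0].metadata.labels}'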

TestFunctional/parallel/NonActiveRuntimeDisabled (0.93s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 ssh "sudo systemctl is-active docker": exit status 1 (550.863817ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 ssh "sudo systemctl is-active crio": exit status 1 (378.657723ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.93s)
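
The exit status 3 in the stderr blocks above is expected: systemctl is-active exits 0 only when the unit is active (3 is the documented code for inactive units), and minikube ssh propagates the remote exit code. A positive counterpart, since containerd is the active runtime in this profile (sketch):

  # should print "active" and exit 0
  out/minikube-linux-arm64 -p functional-626013 ssh "sudo systemctl is-active containerd"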

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-626013 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-626013 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-626013 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-626013 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 2956330: os: process already finished
helpers_test.go:520: unable to terminate pid 2956120: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-626013 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-626013 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [d7342bbc-6ab6-456c-9aea-26795ccc5b91] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [d7342bbc-6ab6-456c-9aea-26795ccc5b91] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004043634s
I1217 10:30:51.592807 2924574 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-626013 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.11.119 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
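
Outside the harness, the same direct access works by keeping a tunnel open and curling the LoadBalancer ingress IP; a sketch (the IP is assigned dynamically, so read it from the service as the IngressIP subtest does):

  # terminal 1: keep the tunnel in the foreground
  out/minikube-linux-arm64 -p functional-626013 tunnel
  # terminal 2: resolve the ingress IP and hit the service
  IP=$(kubectl --context functional-626013 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl "http://$IP"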

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-626013 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-626013 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-626013 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-mfzjr" [c5fa0a2e-e179-4e3b-9370-c671782dd812] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-mfzjr" [c5fa0a2e-e179-4e3b-9370-c671782dd812] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003511486s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "377.225494ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.143471ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "359.654028ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "58.305135ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (8.43s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdany-port891088831/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765967464912890087" to /tmp/TestFunctionalparallelMountCmdany-port891088831/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765967464912890087" to /tmp/TestFunctionalparallelMountCmdany-port891088831/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765967464912890087" to /tmp/TestFunctionalparallelMountCmdany-port891088831/001/test-1765967464912890087
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.809874ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 10:31:05.264673 2924574 retry.go:31] will retry after 603.819168ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 10:31 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 10:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 10:31 test-1765967464912890087
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh cat /mount-9p/test-1765967464912890087
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-626013 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c1ddc2b2-7e98-4d73-b028-8dff38249370] Pending
helpers_test.go:353: "busybox-mount" [c1ddc2b2-7e98-4d73-b028-8dff38249370] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [c1ddc2b2-7e98-4d73-b028-8dff38249370] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [c1ddc2b2-7e98-4d73-b028-8dff38249370] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003852253s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-626013 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdany-port891088831/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.43s)
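
The mount flow above is straightforward to reproduce; a sketch (the host directory is illustrative):

  # expose a host directory inside the node over 9p
  out/minikube-linux-arm64 mount -p functional-626013 /tmp/mount-demo:/mount-9p &
  # confirm the 9p mount from inside the node
  out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T /mount-9p | grep 9p"
  # tear it down when finished
  out/minikube-linux-arm64 -p functional-626013 ssh "sudo umount -f /mount-9p"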

TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 service list -o json
functional_test.go:1504: Took "514.893115ms" to run "out/minikube-linux-arm64 -p functional-626013 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32309
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32309
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
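
Both the HTTPS and URL subtests above resolve to the same NodePort; a quick way to fetch and exercise the endpoint (sketch):

  URL=$(out/minikube-linux-arm64 -p functional-626013 service hello-node --url)
  curl "$URL"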

TestFunctional/parallel/MountCmd/specific-port (2.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdspecific-port3356938701/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (561.532379ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 10:31:13.907564 2924574 retry.go:31] will retry after 529.057796ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdspecific-port3356938701/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 ssh "sudo umount -f /mount-9p": exit status 1 (336.371178ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-626013 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdspecific-port3356938701/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.59s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2511922862/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2511922862/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2511922862/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T" /mount1: exit status 1 (889.423721ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 10:31:16.544527 2924574 retry.go:31] will retry after 534.240716ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-626013 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2511922862/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2511922862/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-626013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2511922862/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.59s)
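
The cleanup path exercised here is the --kill flag, which tears down every running mount daemon for the profile in one shot rather than stopping them individually (sketch):

  out/minikube-linux-arm64 mount -p functional-626013 --kill=true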

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.41s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-626013 version -o=json --components: (1.405886973s)
--- PASS: TestFunctional/parallel/Version/components (1.41s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-626013 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-626013
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-626013
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-626013 image ls --format short --alsologtostderr:
I1217 10:31:26.423414 2961817 out.go:360] Setting OutFile to fd 1 ...
I1217 10:31:26.423592 2961817 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:26.423602 2961817 out.go:374] Setting ErrFile to fd 2...
I1217 10:31:26.423607 2961817 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:26.423895 2961817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 10:31:26.424560 2961817 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:26.424683 2961817 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:26.425197 2961817 cli_runner.go:164] Run: docker container inspect functional-626013 --format={{.State.Status}}
I1217 10:31:26.455554 2961817 ssh_runner.go:195] Run: systemctl --version
I1217 10:31:26.455688 2961817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-626013
I1217 10:31:26.507663 2961817 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35728 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-626013/id_ed25519 Username:docker}
I1217 10:31:26.609260 2961817 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-626013 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                    IMAGE                    │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/kindest/kindnetd                  │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/pause                       │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
│ public.ecr.aws/nginx/nginx                  │ alpine                                │ sha256:10afed │ 23MB   │
│ registry.k8s.io/pause                       │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest                                │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1                               │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.3                               │ sha256:4461da │ 22.8MB │
│ docker.io/library/minikube-local-cache-test │ functional-626013                     │ sha256:139b28 │ 992B   │
│ docker.io/kicbase/echo-server               │ functional-626013                     │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0                               │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.3                               │ sha256:cf65ae │ 24.6MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.3                               │ sha256:7ada8f │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.3                               │ sha256:2f2aa2 │ 15.8MB │
└─────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-626013 image ls --format table --alsologtostderr:
I1217 10:31:27.031467 2961987 out.go:360] Setting OutFile to fd 1 ...
I1217 10:31:27.031726 2961987 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:27.031770 2961987 out.go:374] Setting ErrFile to fd 2...
I1217 10:31:27.031791 2961987 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:27.032208 2961987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 10:31:27.033007 2961987 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:27.033176 2961987 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:27.033824 2961987 cli_runner.go:164] Run: docker container inspect functional-626013 --format={{.State.Status}}
I1217 10:31:27.066090 2961987 ssh_runner.go:195] Run: systemctl --version
I1217 10:31:27.066155 2961987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-626013
I1217 10:31:27.100118 2961987 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35728 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-626013/id_ed25519 Username:docker}
I1217 10:31:27.218318 2961987 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-626013 image ls --format json --alsologtostderr:
[{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-626013"],"size":"2173567"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"22804272"},{"id":"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"15776215"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:139b28e7c45f6120a651876f7db60c8dc8c2da89658d2cb729b8871bf45e8e9c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-626013"],"size":"992"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22985759"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"24567639"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"20719958"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-626013 image ls --format json --alsologtostderr:
I1217 10:31:26.766288 2961890 out.go:360] Setting OutFile to fd 1 ...
I1217 10:31:26.773669 2961890 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:26.773722 2961890 out.go:374] Setting ErrFile to fd 2...
I1217 10:31:26.773731 2961890 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:26.774097 2961890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 10:31:26.774874 2961890 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:26.775021 2961890 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:26.775624 2961890 cli_runner.go:164] Run: docker container inspect functional-626013 --format={{.State.Status}}
I1217 10:31:26.798562 2961890 ssh_runner.go:195] Run: systemctl --version
I1217 10:31:26.798618 2961890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-626013
I1217 10:31:26.818668 2961890 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35728 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-626013/id_ed25519 Username:docker}
I1217 10:31:26.931260 2961890 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
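
The JSON format is the easiest of the four to post-process; for example, listing only the tagged images (sketch, assuming jq is installed on the host):

  out/minikube-linux-arm64 -p functional-626013 image ls --format json | jq -r '.[].repoTags[]'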

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-626013 image ls --format yaml --alsologtostderr:
- id: sha256:139b28e7c45f6120a651876f7db60c8dc8c2da89658d2cb729b8871bf45e8e9c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-626013
size: "992"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "24567639"
- id: sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "20719958"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22985759"
- id: sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "22804272"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-626013
size: "2173567"
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "15776215"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-626013 image ls --format yaml --alsologtostderr:
I1217 10:31:26.433924 2961818 out.go:360] Setting OutFile to fd 1 ...
I1217 10:31:26.434495 2961818 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:26.434530 2961818 out.go:374] Setting ErrFile to fd 2...
I1217 10:31:26.434554 2961818 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:26.434917 2961818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 10:31:26.435598 2961818 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:26.435775 2961818 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:26.438203 2961818 cli_runner.go:164] Run: docker container inspect functional-626013 --format={{.State.Status}}
I1217 10:31:26.489068 2961818 ssh_runner.go:195] Run: systemctl --version
I1217 10:31:26.489128 2961818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-626013
I1217 10:31:26.514083 2961818 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35728 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-626013/id_ed25519 Username:docker}
I1217 10:31:26.619017 2961818 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-626013 ssh pgrep buildkitd: exit status 1 (365.660132ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr: (3.476089024s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr:
I1217 10:31:27.095438 2961993 out.go:360] Setting OutFile to fd 1 ...
I1217 10:31:27.112518 2961993 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:27.112583 2961993 out.go:374] Setting ErrFile to fd 2...
I1217 10:31:27.112608 2961993 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:27.112922 2961993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 10:31:27.113688 2961993 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:27.116279 2961993 config.go:182] Loaded profile config "functional-626013": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1217 10:31:27.116944 2961993 cli_runner.go:164] Run: docker container inspect functional-626013 --format={{.State.Status}}
I1217 10:31:27.151590 2961993 ssh_runner.go:195] Run: systemctl --version
I1217 10:31:27.151646 2961993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-626013
I1217 10:31:27.174055 2961993 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35728 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-626013/id_ed25519 Username:docker}
I1217 10:31:27.275063 2961993 build_images.go:162] Building image from path: /tmp/build.164147927.tar
I1217 10:31:27.275143 2961993 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 10:31:27.283431 2961993 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.164147927.tar
I1217 10:31:27.287001 2961993 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.164147927.tar: stat -c "%s %y" /var/lib/minikube/build/build.164147927.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.164147927.tar': No such file or directory
I1217 10:31:27.287032 2961993 ssh_runner.go:362] scp /tmp/build.164147927.tar --> /var/lib/minikube/build/build.164147927.tar (3072 bytes)
I1217 10:31:27.306494 2961993 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.164147927
I1217 10:31:27.314324 2961993 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.164147927 -xf /var/lib/minikube/build/build.164147927.tar
I1217 10:31:27.323156 2961993 containerd.go:394] Building image: /var/lib/minikube/build/build.164147927
I1217 10:31:27.323262 2961993 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.164147927 --local dockerfile=/var/lib/minikube/build/build.164147927 --output type=image,name=localhost/my-image:functional-626013
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.6s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4cf1b1cf7f48133a7e246d041068838df28984f1c91793013acdb8dca3689967 0.0s done
#8 exporting config sha256:288482aae568d040d6b694d6e16dc2f89d54b3708b47a5eb3cfb790f04192323 0.0s done
#8 naming to localhost/my-image:functional-626013 done
#8 DONE 0.2s
I1217 10:31:30.476484 2961993 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.164147927 --local dockerfile=/var/lib/minikube/build/build.164147927 --output type=image,name=localhost/my-image:functional-626013: (3.153118652s)
I1217 10:31:30.476584 2961993 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.164147927
I1217 10:31:30.485538 2961993 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.164147927.tar
I1217 10:31:30.493459 2961993 build_images.go:218] Built localhost/my-image:functional-626013 from /tmp/build.164147927.tar
I1217 10:31:30.493493 2961993 build_images.go:134] succeeded building to: functional-626013
I1217 10:31:30.493499 2961993 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)
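
The eight buildctl steps above imply a minimal build context: a 97-byte Dockerfile plus a small content.txt. A sketch that reproduces an equivalent context by hand (the directory, file contents, and printf'd text are assumptions inferred from the step log, not the verbatim testdata/build fixture):

    # recreate an equivalent build context and rebuild the image
    mkdir -p /tmp/build-context && cd /tmp/build-context
    printf 'this is a test file\n' > content.txt    # placeholder content
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-arm64 -p functional-626013 image build -t localhost/my-image:functional-626013 .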

TestFunctional/parallel/ImageCommands/Setup (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-626013
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image load --daemon kicbase/echo-server:functional-626013 --alsologtostderr
2025/12/17 10:31:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image load --daemon kicbase/echo-server:functional-626013 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-626013 image load --daemon kicbase/echo-server:functional-626013 --alsologtostderr: (1.055779506s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-626013
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image load --daemon kicbase/echo-server:functional-626013 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-626013 image load --daemon kicbase/echo-server:functional-626013 --alsologtostderr: (1.012175953s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image save kicbase/echo-server:functional-626013 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image rm kicbase/echo-server:functional-626013 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-626013
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-626013 image save --daemon kicbase/echo-server:functional-626013 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-626013
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
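
Taken together, the ImageCommands tests above exercise one full image round-trip through the containerd runtime. Condensed into a single script (commands as in the logs; only the save path is shortened to a hypothetical /tmp location):

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-626013
    out/minikube-linux-arm64 -p functional-626013 image load --daemon kicbase/echo-server:functional-626013
    out/minikube-linux-arm64 -p functional-626013 image save kicbase/echo-server:functional-626013 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-626013 image rm kicbase/echo-server:functional-626013
    out/minikube-linux-arm64 -p functional-626013 image load /tmp/echo-server-save.tar
    docker rmi kicbase/echo-server:functional-626013
    out/minikube-linux-arm64 -p functional-626013 image save --daemon kicbase/echo-server:functional-626013
    docker image inspect kicbase/echo-server:functional-626013    # image is back in the host daemon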

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-626013
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-626013
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-626013
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-232588 cache add registry.k8s.io/pause:3.1: (1.14172542s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-232588 cache add registry.k8s.io/pause:3.3: (1.096300611s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-232588 cache add registry.k8s.io/pause:latest: (1.060345983s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC3808356910/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 cache add minikube-local-cache-test:functional-232588
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 cache delete minikube-local-cache-test:functional-232588
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-232588
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (273.68774ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.89s)
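
The cache_reload pass is a delete-then-restore cycle: remove a cached image inside the node, confirm it is gone (crictl inspecti exits non-zero), then let cache reload push the cached images back. As a script (exactly the commands above):

    out/minikube-linux-arm64 -p functional-232588 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exit 1: image absent
    out/minikube-linux-arm64 -p functional-232588 cache reload
    out/minikube-linux-arm64 -p functional-232588 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds: image restored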

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.99s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1184318673/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 config get cpus: exit status 14 (86.318287ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 config get cpus: exit status 14 (69.900354ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.49s)
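
Both non-zero exits above are expected: config get on an unset key exits with status 14 and the "specified key could not be found in config" error. The full cycle the test drives:

    out/minikube-linux-arm64 -p functional-232588 config unset cpus
    out/minikube-linux-arm64 -p functional-232588 config get cpus    # exit 14: key not set
    out/minikube-linux-arm64 -p functional-232588 config set cpus 2
    out/minikube-linux-arm64 -p functional-232588 config get cpus    # prints 2
    out/minikube-linux-arm64 -p functional-232588 config unset cpus
    out/minikube-linux-arm64 -p functional-232588 config get cpus    # exit 14 again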

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232588 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232588 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (212.193598ms)
-- stdout --
	* [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1217 11:00:50.940506 2991422 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:00:50.940657 2991422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:50.940670 2991422 out.go:374] Setting ErrFile to fd 2...
	I1217 11:00:50.940687 2991422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:50.941073 2991422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:00:50.941717 2991422 out.go:368] Setting JSON to false
	I1217 11:00:50.942833 2991422 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":60201,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:00:50.942906 2991422 start.go:143] virtualization:  
	I1217 11:00:50.946076 2991422 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:00:50.949882 2991422 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:00:50.949960 2991422 notify.go:221] Checking for updates...
	I1217 11:00:50.955819 2991422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:00:50.958790 2991422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:00:50.961618 2991422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:00:50.964548 2991422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:00:50.967469 2991422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:00:50.971025 2991422 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:00:50.971624 2991422 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:00:51.007439 2991422 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:00:51.007601 2991422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:00:51.073448 2991422 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:51.062986923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:00:51.073557 2991422 docker.go:319] overlay module found
	I1217 11:00:51.076662 2991422 out.go:179] * Using the docker driver based on existing profile
	I1217 11:00:51.079668 2991422 start.go:309] selected driver: docker
	I1217 11:00:51.079693 2991422 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:00:51.079862 2991422 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:00:51.083423 2991422 out.go:203] 
	W1217 11:00:51.086488 2991422 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 11:00:51.089399 2991422 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232588 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.46s)
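
The exit status 23 is the RSRC_INSUFFICIENT_REQ_MEMORY path: even with --dry-run, the requested 250MiB is validated against the 1800MB usable minimum before anything is created. Reduced to its essentials (flags copied from the invocation above):

    out/minikube-linux-arm64 start -p functional-232588 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
    echo $?    # 23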

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232588 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232588 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (225.502113ms)
-- stdout --
	* [functional-232588] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1217 11:00:50.708202 2991369 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:00:50.708345 2991369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:50.708384 2991369 out.go:374] Setting ErrFile to fd 2...
	I1217 11:00:50.708396 2991369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:00:50.708791 2991369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:00:50.709229 2991369 out.go:368] Setting JSON to false
	I1217 11:00:50.710083 2991369 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":60201,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:00:50.710153 2991369 start.go:143] virtualization:  
	I1217 11:00:50.715449 2991369 out.go:179] * [functional-232588] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1217 11:00:50.718376 2991369 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:00:50.718479 2991369 notify.go:221] Checking for updates...
	I1217 11:00:50.724154 2991369 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:00:50.727101 2991369 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:00:50.729898 2991369 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:00:50.732742 2991369 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:00:50.735476 2991369 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:00:50.738979 2991369 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:00:50.739679 2991369 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:00:50.775716 2991369 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:00:50.775845 2991369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:00:50.861883 2991369 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:00:50.852351703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:00:50.862005 2991369 docker.go:319] overlay module found
	I1217 11:00:50.865049 2991369 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 11:00:50.867975 2991369 start.go:309] selected driver: docker
	I1217 11:00:50.868001 2991369 start.go:927] validating driver "docker" against &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:00:50.868103 2991369 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:00:50.871718 2991369 out.go:203] 
	W1217 11:00:50.874659 2991369 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 11:00:50.877473 2991369 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.23s)
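
The French output comes from minikube's message translation, which follows the process locale; presumably the harness runs the same dry-run under a French locale, along these lines (the exact mechanism the test uses is an assumption):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-232588 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1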

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh -n functional-232588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 cp functional-232588:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm425854852/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh -n functional-232588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh -n functional-232588 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2924574/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo cat /etc/test/nested/copy/2924574/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2924574.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo cat /etc/ssl/certs/2924574.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2924574.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo cat /usr/share/ca-certificates/2924574.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/29245742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo cat /etc/ssl/certs/29245742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/29245742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo cat /usr/share/ca-certificates/29245742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.67s)
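
The <hash>.0 names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash filenames, which is how the test can predict the path under /etc/ssl/certs for a given certificate. To compute the hash yourself (cert.pem is a placeholder):

    openssl x509 -noout -subject_hash -in cert.pem    # prints the 8-hex-digit hash used as <hash>.0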

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 ssh "sudo systemctl is-active docker": exit status 1 (262.029041ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 ssh "sudo systemctl is-active crio": exit status 1 (327.856397ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.59s)
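
Both probes behave as intended on a containerd-only node: systemctl is-active prints "inactive" and exits 3 for an inactive unit, which minikube ssh surfaces as a non-zero exit. A manual check:

    out/minikube-linux-arm64 -p functional-232588 ssh "sudo systemctl is-active docker"
    echo $?    # non-zero, since docker.service is not running under the containerd runtime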

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-232588 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "357.809659ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "62.162631ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "333.811896ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "51.866793ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.39s)
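
Note: "profile list -o json" emits machine-readable output that scripts can consume. A minimal sketch, assuming minikube's usual top-level "valid"/"invalid" layout and a host with jq installed (neither is exercised by the test itself):

out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'   # print the name of each valid profile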

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun828762534/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (323.574995ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1217 11:00:44.476329 2924574 retry.go:31] will retry after 619.530871ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun828762534/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 ssh "sudo umount -f /mount-9p": exit status 1 (263.868862ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-232588 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun828762534/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-232588 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232588 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3882644013/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.26s)
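
Note: the teardown above leans on "mount -p <profile> --kill=true", which, as used here, terminates the profile's outstanding mount processes in a single call instead of stopping each daemon individually. A minimal sketch of the same sequence outside the harness (source path is a placeholder):

out/minikube-linux-arm64 mount -p functional-232588 /tmp/src:/mount1 &   # three concurrent 9p mounts
out/minikube-linux-arm64 mount -p functional-232588 /tmp/src:/mount2 &
out/minikube-linux-arm64 mount -p functional-232588 /tmp/src:/mount3 &
out/minikube-linux-arm64 mount -p functional-232588 --kill=true          # reap them all at once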

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-232588 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-232588
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-232588
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232588 image ls --format short --alsologtostderr:
I1217 11:01:03.814768 2993599 out.go:360] Setting OutFile to fd 1 ...
I1217 11:01:03.814904 2993599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:03.814915 2993599 out.go:374] Setting ErrFile to fd 2...
I1217 11:01:03.814921 2993599 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:03.815183 2993599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 11:01:03.815829 2993599 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:03.815954 2993599 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:03.816510 2993599 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 11:01:03.834193 2993599 ssh_runner.go:195] Run: systemctl --version
I1217 11:01:03.834259 2993599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 11:01:03.852023 2993599 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 11:01:03.947281 2993599 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-232588 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-rc.1       │ sha256:a34b34 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-rc.1       │ sha256:abca4d │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-232588  │ sha256:139b28 │ 992B   │
│ localhost/my-image                          │ functional-232588  │ sha256:edb978 │ 831kB  │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-rc.1       │ sha256:3c6ba2 │ 24.7MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kicbase/echo-server               │ functional-232588  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/etcd                        │ 3.6.6-0            │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-rc.1       │ sha256:7e3ace │ 22.4MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232588 image ls --format table --alsologtostderr:
I1217 11:01:07.985235 2993995 out.go:360] Setting OutFile to fd 1 ...
I1217 11:01:07.985400 2993995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:07.985437 2993995 out.go:374] Setting ErrFile to fd 2...
I1217 11:01:07.985461 2993995 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:07.986274 2993995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 11:01:07.986983 2993995 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:07.987152 2993995 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:07.987706 2993995 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 11:01:08.016365 2993995 ssh_runner.go:195] Run: systemctl --version
I1217 11:01:08.016452 2993995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 11:01:08.037619 2993995 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 11:01:08.135316 2993995 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-232588 image ls --format json --alsologtostderr:
[{"id":"sha256:edb978592cc0fce3202df5ee58e082dd4c5400df5379a353440662a2925962e9","repoDigests":[],"repoTags":["localhost/my-image:functional-232588"],"size":"830618"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54","repoDigests":["registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"24692223"},{"id":"sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"15405535"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21749640"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:139b28e7c45f6120a651876f7db60c8dc8c2da89658d2cb729b8871bf45e8e9c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-232588"],"size":"992"},{"id":"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e","repoDigests":["registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"22432301"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-232588"],"size":"2173567"},{"id":"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"20672157"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232588 image ls --format json --alsologtostderr:
I1217 11:01:07.772569 2993957 out.go:360] Setting OutFile to fd 1 ...
I1217 11:01:07.772761 2993957 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:07.772792 2993957 out.go:374] Setting ErrFile to fd 2...
I1217 11:01:07.772815 2993957 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:07.773071 2993957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 11:01:07.773717 2993957 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:07.773900 2993957 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:07.774459 2993957 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 11:01:07.792000 2993957 ssh_runner.go:195] Run: systemctl --version
I1217 11:01:07.792062 2993957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 11:01:07.810547 2993957 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 11:01:07.902844 2993957 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-232588 image ls --format yaml --alsologtostderr:
- id: sha256:139b28e7c45f6120a651876f7db60c8dc8c2da89658d2cb729b8871bf45e8e9c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-232588
size: "992"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "20672157"
- id: sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "22432301"
- id: sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "15405535"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "24692223"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-232588
size: "2173567"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232588 image ls --format yaml --alsologtostderr:
I1217 11:01:04.033708 2993636 out.go:360] Setting OutFile to fd 1 ...
I1217 11:01:04.033823 2993636 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:04.033836 2993636 out.go:374] Setting ErrFile to fd 2...
I1217 11:01:04.033842 2993636 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:04.034166 2993636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 11:01:04.034808 2993636 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:04.034935 2993636 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:04.035481 2993636 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 11:01:04.053612 2993636 ssh_runner.go:195] Run: systemctl --version
I1217 11:01:04.053675 2993636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 11:01:04.072165 2993636 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 11:01:04.167121 2993636 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232588 ssh pgrep buildkitd: exit status 1 (294.682246ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image build -t localhost/my-image:functional-232588 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-232588 image build -t localhost/my-image:functional-232588 testdata/build --alsologtostderr: (2.998908538s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232588 image build -t localhost/my-image:functional-232588 testdata/build --alsologtostderr:
I1217 11:01:04.549408 2993743 out.go:360] Setting OutFile to fd 1 ...
I1217 11:01:04.549533 2993743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:04.549552 2993743 out.go:374] Setting ErrFile to fd 2...
I1217 11:01:04.549557 2993743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:01:04.549798 2993743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 11:01:04.550442 2993743 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:04.551148 2993743 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 11:01:04.551808 2993743 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 11:01:04.570162 2993743 ssh_runner.go:195] Run: systemctl --version
I1217 11:01:04.570221 2993743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 11:01:04.587432 2993743 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 11:01:04.679165 2993743 build_images.go:162] Building image from path: /tmp/build.1974491096.tar
I1217 11:01:04.679240 2993743 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 11:01:04.687725 2993743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1974491096.tar
I1217 11:01:04.691784 2993743 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1974491096.tar: stat -c "%s %y" /var/lib/minikube/build/build.1974491096.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1974491096.tar': No such file or directory
I1217 11:01:04.691811 2993743 ssh_runner.go:362] scp /tmp/build.1974491096.tar --> /var/lib/minikube/build/build.1974491096.tar (3072 bytes)
I1217 11:01:04.710445 2993743 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1974491096
I1217 11:01:04.718862 2993743 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1974491096 -xf /var/lib/minikube/build/build.1974491096.tar
I1217 11:01:04.727113 2993743 containerd.go:394] Building image: /var/lib/minikube/build/build.1974491096
I1217 11:01:04.727182 2993743 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1974491096 --local dockerfile=/var/lib/minikube/build/build.1974491096 --output type=image,name=localhost/my-image:functional-232588
#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1a61222f6606471f663882a0fffb6fb92c0cba527b98b8414dcaaf3610537479 0.0s done
#8 exporting config sha256:edb978592cc0fce3202df5ee58e082dd4c5400df5379a353440662a2925962e9 0.0s done
#8 naming to localhost/my-image:functional-232588 done
#8 DONE 0.2s
I1217 11:01:07.466045 2993743 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1974491096 --local dockerfile=/var/lib/minikube/build/build.1974491096 --output type=image,name=localhost/my-image:functional-232588: (2.738834709s)
I1217 11:01:07.466146 2993743 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1974491096
I1217 11:01:07.476225 2993743 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1974491096.tar
I1217 11:01:07.486465 2993743 build_images.go:218] Built localhost/my-image:functional-232588 from /tmp/build.1974491096.tar
I1217 11:01:07.486511 2993743 build_images.go:134] succeeded building to: functional-232588
I1217 11:01:07.486517 2993743 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.51s)
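
Note: the buildctl steps above (#5-#7) reveal the shape of the Dockerfile under testdata/build. A reconstruction inferred from the log, not a copy of the actual test data (the contents of content.txt are unknown; 'placeholder' stands in):

mkdir -p testdata/build && printf 'placeholder' > testdata/build/content.txt
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-linux-arm64 -p functional-232588 image build -t localhost/my-image:functional-232588 testdata/build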

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-232588
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image load --daemon kicbase/echo-server:functional-232588 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image load --daemon kicbase/echo-server:functional-232588 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-232588
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image load --daemon kicbase/echo-server:functional-232588 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image save kicbase/echo-server:functional-232588 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image rm kicbase/echo-server:functional-232588 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.98s)
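
Note: ImageSaveToFile and ImageLoadFromFile together form a tarball round trip. The same flow by hand, using only commands and paths that appear verbatim in this run:

out/minikube-linux-arm64 -p functional-232588 image save kicbase/echo-server:functional-232588 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
out/minikube-linux-arm64 -p functional-232588 image rm kicbase/echo-server:functional-232588    # drop it from the runtime
out/minikube-linux-arm64 -p functional-232588 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
out/minikube-linux-arm64 -p functional-232588 image ls                                          # confirm it is back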

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-232588
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 image save --daemon kicbase/echo-server:functional-232588 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-232588
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-232588 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-232588
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-232588
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-232588
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (146.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1217 11:03:36.153522 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:36.161751 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:36.173104 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:36.194511 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:36.235874 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:36.317366 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:36.478857 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:36.800289 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:37.441851 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:38.723885 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:41.286316 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:46.408302 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:03:56.649885 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:04:17.132921 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:04:28.205767 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:04:58.094592 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m25.194667367s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (146.07s)

TestMultiControlPlane/serial/DeployApp (7.66s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 kubectl -- rollout status deployment/busybox: (4.644541009s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-kpmkk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-v78cb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-zdtf6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-kpmkk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-v78cb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-zdtf6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-kpmkk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-v78cb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-zdtf6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.66s)

TestMultiControlPlane/serial/PingHostFromPods (1.7s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-kpmkk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-kpmkk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-v78cb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-v78cb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-zdtf6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 kubectl -- exec busybox-7b57f96db7-zdtf6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)
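
Note: the pipeline above recovers the host gateway from DNS: awk 'NR==5' keeps the fifth line of the nslookup reply (where BusyBox nslookup prints the resolved address) and cut -d' ' -f3 takes its third space-separated field, which the test then pings. Standalone form outside the minikube kubectl wrapper, assuming the same BusyBox output layout:

kubectl --context ha-012672 exec busybox-7b57f96db7-kpmkk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"   # prints e.g. 192.168.49.1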

TestMultiControlPlane/serial/AddWorkerNode (31.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 node add --alsologtostderr -v 5
E1217 11:05:43.083397 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 node add --alsologtostderr -v 5: (29.997276698s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5: (1.084230548s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.08s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-012672 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.067282395s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

TestMultiControlPlane/serial/CopyFile (20.31s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 status --output json --alsologtostderr -v 5: (1.079532579s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp testdata/cp-test.txt ha-012672:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1355785269/001/cp-test_ha-012672.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672:/home/docker/cp-test.txt ha-012672-m02:/home/docker/cp-test_ha-012672_ha-012672-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m02 "sudo cat /home/docker/cp-test_ha-012672_ha-012672-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672:/home/docker/cp-test.txt ha-012672-m03:/home/docker/cp-test_ha-012672_ha-012672-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m03 "sudo cat /home/docker/cp-test_ha-012672_ha-012672-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672:/home/docker/cp-test.txt ha-012672-m04:/home/docker/cp-test_ha-012672_ha-012672-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m04 "sudo cat /home/docker/cp-test_ha-012672_ha-012672-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp testdata/cp-test.txt ha-012672-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1355785269/001/cp-test_ha-012672-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m02:/home/docker/cp-test.txt ha-012672:/home/docker/cp-test_ha-012672-m02_ha-012672.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672 "sudo cat /home/docker/cp-test_ha-012672-m02_ha-012672.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m02:/home/docker/cp-test.txt ha-012672-m03:/home/docker/cp-test_ha-012672-m02_ha-012672-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m03 "sudo cat /home/docker/cp-test_ha-012672-m02_ha-012672-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m02:/home/docker/cp-test.txt ha-012672-m04:/home/docker/cp-test_ha-012672-m02_ha-012672-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m04 "sudo cat /home/docker/cp-test_ha-012672-m02_ha-012672-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp testdata/cp-test.txt ha-012672-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1355785269/001/cp-test_ha-012672-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m03:/home/docker/cp-test.txt ha-012672:/home/docker/cp-test_ha-012672-m03_ha-012672.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672 "sudo cat /home/docker/cp-test_ha-012672-m03_ha-012672.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m03:/home/docker/cp-test.txt ha-012672-m02:/home/docker/cp-test_ha-012672-m03_ha-012672-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m02 "sudo cat /home/docker/cp-test_ha-012672-m03_ha-012672-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m03:/home/docker/cp-test.txt ha-012672-m04:/home/docker/cp-test_ha-012672-m03_ha-012672-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m04 "sudo cat /home/docker/cp-test_ha-012672-m03_ha-012672-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp testdata/cp-test.txt ha-012672-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1355785269/001/cp-test_ha-012672-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m04:/home/docker/cp-test.txt ha-012672:/home/docker/cp-test_ha-012672-m04_ha-012672.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672 "sudo cat /home/docker/cp-test_ha-012672-m04_ha-012672.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m04:/home/docker/cp-test.txt ha-012672-m02:/home/docker/cp-test_ha-012672-m04_ha-012672-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m02 "sudo cat /home/docker/cp-test_ha-012672-m04_ha-012672-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 cp ha-012672-m04:/home/docker/cp-test.txt ha-012672-m03:/home/docker/cp-test_ha-012672-m04_ha-012672-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m03 "sudo cat /home/docker/cp-test_ha-012672-m04_ha-012672-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.31s)
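
Every hop in the copy matrix above follows the same shape: `minikube cp` to place the file, then `minikube ssh ... sudo cat` to read it back. One hop, condensed from the log (profile and node names are from this run):

  # Seed the primary node, fan out to a secondary, verify by readback.
  out/minikube-linux-arm64 -p ha-012672 cp testdata/cp-test.txt ha-012672:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-012672 cp ha-012672:/home/docker/cp-test.txt \
    ha-012672-m02:/home/docker/cp-test_ha-012672_ha-012672-m02.txt
  out/minikube-linux-arm64 -p ha-012672 ssh -n ha-012672-m02 \
    "sudo cat /home/docker/cp-test_ha-012672_ha-012672-m02.txt"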

TestMultiControlPlane/serial/StopSecondaryNode (12.96s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 node stop m02 --alsologtostderr -v 5
E1217 11:06:20.017404 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 node stop m02 --alsologtostderr -v 5: (12.171215994s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5: exit status 7 (789.909828ms)
-- stdout --
	ha-012672
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-012672-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-012672-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-012672-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1217 11:06:20.621300 3011333 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:06:20.621695 3011333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:06:20.621709 3011333 out.go:374] Setting ErrFile to fd 2...
	I1217 11:06:20.621716 3011333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:06:20.622251 3011333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:06:20.622481 3011333 out.go:368] Setting JSON to false
	I1217 11:06:20.622516 3011333 mustload.go:66] Loading cluster: ha-012672
	I1217 11:06:20.622643 3011333 notify.go:221] Checking for updates...
	I1217 11:06:20.623053 3011333 config.go:182] Loaded profile config "ha-012672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 11:06:20.623081 3011333 status.go:174] checking status of ha-012672 ...
	I1217 11:06:20.623755 3011333 cli_runner.go:164] Run: docker container inspect ha-012672 --format={{.State.Status}}
	I1217 11:06:20.644911 3011333 status.go:371] ha-012672 host status = "Running" (err=<nil>)
	I1217 11:06:20.644946 3011333 host.go:66] Checking if "ha-012672" exists ...
	I1217 11:06:20.645278 3011333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-012672
	I1217 11:06:20.668565 3011333 host.go:66] Checking if "ha-012672" exists ...
	I1217 11:06:20.668883 3011333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:06:20.668941 3011333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-012672
	I1217 11:06:20.700992 3011333 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35738 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/ha-012672/id_ed25519 Username:docker}
	I1217 11:06:20.796395 3011333 ssh_runner.go:195] Run: systemctl --version
	I1217 11:06:20.803829 3011333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:06:20.820716 3011333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:06:20.905450 3011333 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-17 11:06:20.894754161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:06:20.906155 3011333 kubeconfig.go:125] found "ha-012672" server: "https://192.168.49.254:8443"
	I1217 11:06:20.906193 3011333 api_server.go:166] Checking apiserver status ...
	I1217 11:06:20.906252 3011333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:06:20.921592 3011333 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	I1217 11:06:20.930833 3011333 api_server.go:182] apiserver freezer: "4:freezer:/docker/916c59f4900198342f6e55521a8587b0b3882e5da8eccdd557565e21e307bc36/kubepods/burstable/poda94a77656b6e270b26212ec48d555f51/615aa5f28d7a28ea0b8777c35f0380c6d9799d73d11e643747290b18038f5949"
	I1217 11:06:20.930899 3011333 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/916c59f4900198342f6e55521a8587b0b3882e5da8eccdd557565e21e307bc36/kubepods/burstable/poda94a77656b6e270b26212ec48d555f51/615aa5f28d7a28ea0b8777c35f0380c6d9799d73d11e643747290b18038f5949/freezer.state
	I1217 11:06:20.939575 3011333 api_server.go:204] freezer state: "THAWED"
	I1217 11:06:20.939601 3011333 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 11:06:20.948150 3011333 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 11:06:20.948179 3011333 status.go:463] ha-012672 apiserver status = Running (err=<nil>)
	I1217 11:06:20.948190 3011333 status.go:176] ha-012672 status: &{Name:ha-012672 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:06:20.948206 3011333 status.go:174] checking status of ha-012672-m02 ...
	I1217 11:06:20.948582 3011333 cli_runner.go:164] Run: docker container inspect ha-012672-m02 --format={{.State.Status}}
	I1217 11:06:20.966769 3011333 status.go:371] ha-012672-m02 host status = "Stopped" (err=<nil>)
	I1217 11:06:20.966795 3011333 status.go:384] host is not running, skipping remaining checks
	I1217 11:06:20.966803 3011333 status.go:176] ha-012672-m02 status: &{Name:ha-012672-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:06:20.966823 3011333 status.go:174] checking status of ha-012672-m03 ...
	I1217 11:06:20.967152 3011333 cli_runner.go:164] Run: docker container inspect ha-012672-m03 --format={{.State.Status}}
	I1217 11:06:20.984156 3011333 status.go:371] ha-012672-m03 host status = "Running" (err=<nil>)
	I1217 11:06:20.984181 3011333 host.go:66] Checking if "ha-012672-m03" exists ...
	I1217 11:06:20.984548 3011333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-012672-m03
	I1217 11:06:21.007797 3011333 host.go:66] Checking if "ha-012672-m03" exists ...
	I1217 11:06:21.008124 3011333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:06:21.008171 3011333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-012672-m03
	I1217 11:06:21.026031 3011333 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/ha-012672-m03/id_ed25519 Username:docker}
	I1217 11:06:21.119741 3011333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:06:21.136059 3011333 kubeconfig.go:125] found "ha-012672" server: "https://192.168.49.254:8443"
	I1217 11:06:21.136090 3011333 api_server.go:166] Checking apiserver status ...
	I1217 11:06:21.136142 3011333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:06:21.150316 3011333 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	I1217 11:06:21.159422 3011333 api_server.go:182] apiserver freezer: "4:freezer:/docker/afa942b90e9bcbe16cc5e50ed30eadff726f18ae7a99180670128c7a12ac26da/kubepods/burstable/pod11275e3120a188c99ee8c1a8841e16c5/5772df1f4a0694d3aec4d6186256f1d575973b34335ddd232df4a38766c98ba1"
	I1217 11:06:21.159505 3011333 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/afa942b90e9bcbe16cc5e50ed30eadff726f18ae7a99180670128c7a12ac26da/kubepods/burstable/pod11275e3120a188c99ee8c1a8841e16c5/5772df1f4a0694d3aec4d6186256f1d575973b34335ddd232df4a38766c98ba1/freezer.state
	I1217 11:06:21.171101 3011333 api_server.go:204] freezer state: "THAWED"
	I1217 11:06:21.171130 3011333 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 11:06:21.179338 3011333 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 11:06:21.179430 3011333 status.go:463] ha-012672-m03 apiserver status = Running (err=<nil>)
	I1217 11:06:21.179455 3011333 status.go:176] ha-012672-m03 status: &{Name:ha-012672-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:06:21.179505 3011333 status.go:174] checking status of ha-012672-m04 ...
	I1217 11:06:21.179841 3011333 cli_runner.go:164] Run: docker container inspect ha-012672-m04 --format={{.State.Status}}
	I1217 11:06:21.198474 3011333 status.go:371] ha-012672-m04 host status = "Running" (err=<nil>)
	I1217 11:06:21.198498 3011333 host.go:66] Checking if "ha-012672-m04" exists ...
	I1217 11:06:21.198804 3011333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-012672-m04
	I1217 11:06:21.216325 3011333 host.go:66] Checking if "ha-012672-m04" exists ...
	I1217 11:06:21.216796 3011333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:06:21.216844 3011333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-012672-m04
	I1217 11:06:21.233359 3011333 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35753 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/ha-012672-m04/id_ed25519 Username:docker}
	I1217 11:06:21.325891 3011333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:06:21.338927 3011333 status.go:176] ha-012672-m04 status: &{Name:ha-012672-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.96s)
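
The stderr above also documents how `status` derives the apiserver column: pgrep the kube-apiserver process, read that pid's freezer cgroup, require freezer.state to be THAWED, and finally hit /healthz on the load-balancer endpoint. A manual equivalent run inside a node (a sketch; cgroup v1 freezer paths as seen in this run):

  # Locate the apiserver and its freezer cgroup.
  PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
  CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)

  # THAWED means the pod is running; FROZEN means it is paused.
  sudo cat /sys/fs/cgroup/freezer$CG/freezer.state

  # The final gate: the HA endpoint must answer healthz with 200.
  curl -ks https://192.168.49.254:8443/healthz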

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.67s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 node start m02 --alsologtostderr -v 5: (12.204063569s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5: (1.323084798s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.37s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 stop --alsologtostderr -v 5: (37.734088769s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 start --wait true --alsologtostderr -v 5: (1m0.484980901s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.37s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 node delete m03 --alsologtostderr -v 5: (10.158299202s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)
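
The readiness assertion uses a kubectl go-template rather than jsonpath: it walks every node's conditions and prints only the Ready status, so one True per remaining node is the expected output. The same query, runnable against this context:

  # One line per node, True when the Ready condition holds.
  kubectl --context ha-012672 get nodes \
    -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'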

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

TestMultiControlPlane/serial/StopCluster (36.45s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 stop --alsologtostderr -v 5
E1217 11:08:36.153508 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:08:46.153724 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 stop --alsologtostderr -v 5: (36.325827475s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5: exit status 7 (127.596426ms)
-- stdout --
	ha-012672
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-012672-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-012672-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 11:09:03.610431 3026046 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:09:03.610630 3026046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:09:03.610658 3026046 out.go:374] Setting ErrFile to fd 2...
	I1217 11:09:03.610679 3026046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:09:03.610982 3026046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:09:03.611229 3026046 out.go:368] Setting JSON to false
	I1217 11:09:03.611285 3026046 mustload.go:66] Loading cluster: ha-012672
	I1217 11:09:03.611373 3026046 notify.go:221] Checking for updates...
	I1217 11:09:03.611787 3026046 config.go:182] Loaded profile config "ha-012672": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 11:09:03.611838 3026046 status.go:174] checking status of ha-012672 ...
	I1217 11:09:03.612797 3026046 cli_runner.go:164] Run: docker container inspect ha-012672 --format={{.State.Status}}
	I1217 11:09:03.631322 3026046 status.go:371] ha-012672 host status = "Stopped" (err=<nil>)
	I1217 11:09:03.631343 3026046 status.go:384] host is not running, skipping remaining checks
	I1217 11:09:03.631350 3026046 status.go:176] ha-012672 status: &{Name:ha-012672 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:09:03.631381 3026046 status.go:174] checking status of ha-012672-m02 ...
	I1217 11:09:03.631689 3026046 cli_runner.go:164] Run: docker container inspect ha-012672-m02 --format={{.State.Status}}
	I1217 11:09:03.666096 3026046 status.go:371] ha-012672-m02 host status = "Stopped" (err=<nil>)
	I1217 11:09:03.666122 3026046 status.go:384] host is not running, skipping remaining checks
	I1217 11:09:03.666130 3026046 status.go:176] ha-012672-m02 status: &{Name:ha-012672-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:09:03.666163 3026046 status.go:174] checking status of ha-012672-m04 ...
	I1217 11:09:03.666488 3026046 cli_runner.go:164] Run: docker container inspect ha-012672-m04 --format={{.State.Status}}
	I1217 11:09:03.687325 3026046 status.go:371] ha-012672-m04 host status = "Stopped" (err=<nil>)
	I1217 11:09:03.687346 3026046 status.go:384] host is not running, skipping remaining checks
	I1217 11:09:03.687360 3026046 status.go:176] ha-012672-m04 status: &{Name:ha-012672-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.45s)

TestMultiControlPlane/serial/RestartCluster (59.95s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1217 11:09:03.858834 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:09:28.205396 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (58.967671464s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (59.95s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (95.17s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 node add --control-plane --alsologtostderr -v 5
E1217 11:10:43.083547 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 node add --control-plane --alsologtostderr -v 5: (1m34.076372334s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5: (1.092172331s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (95.17s)
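
Re-adding a control plane is the same `node add` used for the worker earlier, plus --control-plane; the much longer runtime (~94s here vs ~30s for the worker) presumably reflects the extra etcd member join and certificate work. The invocation from this run:

  # Join a third control-plane node, then confirm it reports as
  # "type: Control Plane" in status.
  out/minikube-linux-arm64 -p ha-012672 node add --control-plane --alsologtostderr -v 5
  out/minikube-linux-arm64 -p ha-012672 status --alsologtostderr -v 5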

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.106444098s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

TestJSONOutput/start/Command (49.2s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-430571 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-430571 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (49.193229311s)
--- PASS: TestJSONOutput/start/Command (49.20s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-430571 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-430571 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-430571 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-430571 --output=json --user=testUser: (6.018521922s)
--- PASS: TestJSONOutput/stop/Command (6.02s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-209831 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-209831 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.127531ms)
-- stdout --
	{"specversion":"1.0","id":"4a017e2c-d332-4795-b4ce-d7caea11d0c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-209831] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eec3d8ae-ce46-4536-9e71-844e924619ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22182"}}
	{"specversion":"1.0","id":"b809c29a-2d02-453d-9b42-f2f401f4638d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"454df794-d5db-4ca8-9669-3bff57b025b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig"}}
	{"specversion":"1.0","id":"6f228835-ea38-4996-a852-0e41b15e5e91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube"}}
	{"specversion":"1.0","id":"0fb0db6b-33b8-4ff1-b4ff-c0d86137a5c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e0c63497-50c3-46aa-9dbf-45f6a86282c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ce00e17f-108b-44b9-8b65-818e710981ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-209831" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-209831
--- PASS: TestErrorJSONOutput (0.24s)
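
Because each --output=json line is a self-contained CloudEvents-style object, error handling can be scripted without scraping prose. A sketch assuming jq is available (jq is not part of this harness):

  # Re-run the intentionally failing start and pull out only error events.
  out/minikube-linux-arm64 start -p json-output-error-209831 --memory=3072 \
    --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # expected output, per the log above:
  #   The driver 'fail' is not supported on linux/arm64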

TestKicCustomNetwork/create_custom_network (37.19s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-008075 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-008075 --network=: (34.882478416s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-008075" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-008075
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-008075: (2.274495967s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.19s)

TestKicCustomNetwork/use_default_bridge_network (37.83s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-978634 --network=bridge
E1217 11:13:36.152889 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-978634 --network=bridge: (35.667361541s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-978634" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-978634
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-978634: (2.139027274s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.83s)

TestKicExistingNetwork (32.94s)

=== RUN   TestKicExistingNetwork
I1217 11:14:06.561167 2924574 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 11:14:06.577740 2924574 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 11:14:06.577818 2924574 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1217 11:14:06.577836 2924574 cli_runner.go:164] Run: docker network inspect existing-network
W1217 11:14:06.594182 2924574 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1217 11:14:06.594216 2924574 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1217 11:14:06.594230 2924574 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1217 11:14:06.594333 2924574 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 11:14:06.611543 2924574 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f429477a79c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ea:a9:f2:52:01} reservation:<nil>}
I1217 11:14:06.611870 2924574 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001aa3860}
I1217 11:14:06.611891 2924574 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1217 11:14:06.611943 2924574 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1217 11:14:06.669732 2924574 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-756864 --network=existing-network
E1217 11:14:28.205508 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-756864 --network=existing-network: (30.628506315s)
helpers_test.go:176: Cleaning up "existing-network-756864" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-756864
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-756864: (2.171500426s)
I1217 11:14:39.486056 2924574 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.94s)
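
The inline log is the useful artifact here: minikube inspects the requested network, finds it missing, skips the already-taken 192.168.49.0/24 subnet, and creates the bridge itself with its own labels so cleanup can find it later. The create command as recorded above, reformatted onto one invocation:

  # Pre-create a labeled bridge network the way minikube does, so a later
  # `minikube start --network=existing-network` can adopt it.
  docker network create --driver=bridge \
    --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true \
    --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network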

TestKicCustomSubnet (36.45s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-981725 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-981725 --subnet=192.168.60.0/24: (34.258153415s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-981725 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-981725" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-981725
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-981725: (2.162948284s)
--- PASS: TestKicCustomSubnet (36.45s)

TestKicStaticIP (36.27s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-645039 --static-ip=192.168.200.200
E1217 11:15:43.084787 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-645039 --static-ip=192.168.200.200: (33.841361128s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-645039 ip
helpers_test.go:176: Cleaning up "static-ip-645039" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-645039
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-645039: (2.2670671s)
--- PASS: TestKicStaticIP (36.27s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-198592 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-198592 --driver=docker  --container-runtime=containerd: (31.940781147s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-201201 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-201201 --driver=docker  --container-runtime=containerd: (32.181943452s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-198592
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-201201
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-201201" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-201201
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-201201: (2.086134513s)
helpers_test.go:176: Cleaning up "first-198592" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-198592
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-198592: (2.64477508s)
--- PASS: TestMinikubeProfile (70.35s)
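The profile assertions above lean on the JSON listing; a minimal sketch of the same flow, assuming jq is installed on the host (profile name illustrative; the valid/invalid split is the shape this minikube build emits):

    minikube profile first-demo                          # select the active profile
    minikube profile list -ojson | jq '.valid[].Name'    # enumerate valid profiles by name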

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.36s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-725700 --memory=3072 --mount-string /tmp/TestMountStartserial931834054/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-725700 --memory=3072 --mount-string /tmp/TestMountStartserial931834054/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.361526151s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.36s)
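The mount flags above wire a host directory into the guest; condensed to its essentials, with an illustrative host path and profile name:

    minikube start -p mount-demo --memory=3072 --no-kubernetes \
      --driver=docker --container-runtime=containerd \
      --mount-string /tmp/host-dir:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-port 46464 --mount-msize 6543
    # The VerifyMount* steps below reduce to listing the guest-side path:
    minikube -p mount-demo ssh -- ls /minikube-host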

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-725700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.96s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-727882 --memory=3072 --mount-string /tmp/TestMountStartserial931834054/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-727882 --memory=3072 --mount-string /tmp/TestMountStartserial931834054/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.961668826s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.96s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-727882 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-725700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-725700 --alsologtostderr -v=5: (1.722325835s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-727882 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-727882
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-727882: (1.289189508s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.44s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-727882
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-727882: (6.442765865s)
--- PASS: TestMountStart/serial/RestartStopped (7.44s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-727882 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (80.52s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-488082 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1217 11:18:36.152727 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-488082 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m20.003272545s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (80.52s)

TestMultiNode/serial/DeployApp2Nodes (5.51s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-488082 -- rollout status deployment/busybox: (3.660236367s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-2gmcn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-c75js -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-2gmcn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-c75js -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-2gmcn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-c75js -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.51s)
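The pod discovery above is plain kubectl JSONPath and can be run standalone against the same cluster (the pod placeholder is illustrative):

    kubectl --context multinode-488082 get pods -o jsonpath='{.items[*].status.podIP}'    # one IP per busybox replica
    kubectl --context multinode-488082 get pods -o jsonpath='{.items[*].metadata.name}'   # names fed to the exec calls
    kubectl --context multinode-488082 exec <pod> -- nslookup kubernetes.default.svc.cluster.local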

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-2gmcn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-2gmcn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-c75js -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-488082 -- exec busybox-7b57f96db7-c75js -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
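The pipeline in those exec calls extracts the host gateway address from busybox's nslookup output: line 5 of the reply carries the answer record, and its third space-separated field is the address. Standalone:

    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3    # e.g. 192.168.67.1
    ping -c 1 192.168.67.1                                           # one echo request back to the host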

                                                
                                    
TestMultiNode/serial/AddNode (29.53s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-488082 -v=5 --alsologtostderr
E1217 11:19:11.280080 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:19:28.205249 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-488082 -v=5 --alsologtostderr: (28.858590203s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.53s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-488082 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

TestMultiNode/serial/CopyFile (10.34s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp testdata/cp-test.txt multinode-488082:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp multinode-488082:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile829198312/001/cp-test_multinode-488082.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp multinode-488082:/home/docker/cp-test.txt multinode-488082-m02:/home/docker/cp-test_multinode-488082_multinode-488082-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m02 "sudo cat /home/docker/cp-test_multinode-488082_multinode-488082-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp multinode-488082:/home/docker/cp-test.txt multinode-488082-m03:/home/docker/cp-test_multinode-488082_multinode-488082-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m03 "sudo cat /home/docker/cp-test_multinode-488082_multinode-488082-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp testdata/cp-test.txt multinode-488082-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp multinode-488082-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile829198312/001/cp-test_multinode-488082-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp multinode-488082-m02:/home/docker/cp-test.txt multinode-488082:/home/docker/cp-test_multinode-488082-m02_multinode-488082.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082 "sudo cat /home/docker/cp-test_multinode-488082-m02_multinode-488082.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp multinode-488082-m02:/home/docker/cp-test.txt multinode-488082-m03:/home/docker/cp-test_multinode-488082-m02_multinode-488082-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m03 "sudo cat /home/docker/cp-test_multinode-488082-m02_multinode-488082-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp testdata/cp-test.txt multinode-488082-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp multinode-488082-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile829198312/001/cp-test_multinode-488082-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp multinode-488082-m03:/home/docker/cp-test.txt multinode-488082:/home/docker/cp-test_multinode-488082-m03_multinode-488082.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082 "sudo cat /home/docker/cp-test_multinode-488082-m03_multinode-488082.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 cp multinode-488082-m03:/home/docker/cp-test.txt multinode-488082-m02:/home/docker/cp-test_multinode-488082-m03_multinode-488082-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 ssh -n multinode-488082-m02 "sudo cat /home/docker/cp-test_multinode-488082-m03_multinode-488082-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.34s)
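The copy matrix exercises every direction minikube cp supports: host to node, node to host, and node to node, each verified by reading the file back over ssh. Condensed (the host destination path is illustrative):

    minikube -p multinode-488082 cp testdata/cp-test.txt multinode-488082:/home/docker/cp-test.txt        # host -> node
    minikube -p multinode-488082 cp multinode-488082:/home/docker/cp-test.txt /tmp/cp-test-copy.txt       # node -> host
    minikube -p multinode-488082 cp multinode-488082:/home/docker/cp-test.txt multinode-488082-m02:/home/docker/cp-test.txt   # node -> node
    minikube -p multinode-488082 ssh -n multinode-488082-m02 "sudo cat /home/docker/cp-test.txt"          # read-back check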

                                                
                                    
TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-488082 node stop m03: (1.321750017s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-488082 status: exit status 7 (537.305443ms)
-- stdout --
	multinode-488082
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-488082-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-488082-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-488082 status --alsologtostderr: exit status 7 (596.256117ms)
-- stdout --
	multinode-488082
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-488082-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-488082-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 11:19:41.962902 3079514 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:19:41.963089 3079514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:19:41.963119 3079514 out.go:374] Setting ErrFile to fd 2...
	I1217 11:19:41.963144 3079514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:19:41.963432 3079514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:19:41.963660 3079514 out.go:368] Setting JSON to false
	I1217 11:19:41.963717 3079514 mustload.go:66] Loading cluster: multinode-488082
	I1217 11:19:41.963806 3079514 notify.go:221] Checking for updates...
	I1217 11:19:41.964160 3079514 config.go:182] Loaded profile config "multinode-488082": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 11:19:41.964204 3079514 status.go:174] checking status of multinode-488082 ...
	I1217 11:19:41.965081 3079514 cli_runner.go:164] Run: docker container inspect multinode-488082 --format={{.State.Status}}
	I1217 11:19:41.984492 3079514 status.go:371] multinode-488082 host status = "Running" (err=<nil>)
	I1217 11:19:41.984514 3079514 host.go:66] Checking if "multinode-488082" exists ...
	I1217 11:19:41.985015 3079514 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-488082
	I1217 11:19:42.021728 3079514 host.go:66] Checking if "multinode-488082" exists ...
	I1217 11:19:42.022063 3079514 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:19:42.022126 3079514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-488082
	I1217 11:19:42.042496 3079514 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35861 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/multinode-488082/id_ed25519 Username:docker}
	I1217 11:19:42.162875 3079514 ssh_runner.go:195] Run: systemctl --version
	I1217 11:19:42.171129 3079514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:19:42.188532 3079514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:19:42.263533 3079514 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 11:19:42.251376127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:19:42.264122 3079514 kubeconfig.go:125] found "multinode-488082" server: "https://192.168.67.2:8443"
	I1217 11:19:42.264176 3079514 api_server.go:166] Checking apiserver status ...
	I1217 11:19:42.264230 3079514 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:19:42.278943 3079514 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup
	I1217 11:19:42.289091 3079514 api_server.go:182] apiserver freezer: "4:freezer:/docker/f7f494691d0d6dfbd6dca5d49937691cccf8bf182022200c8db270f282d81474/kubepods/burstable/pod73d696da651af7584c3aa06b7045f804/c6efb88c996a721ff80945ff28f99bcf7038a025238d2fab8d1bcbe9d739aee7"
	I1217 11:19:42.289180 3079514 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f7f494691d0d6dfbd6dca5d49937691cccf8bf182022200c8db270f282d81474/kubepods/burstable/pod73d696da651af7584c3aa06b7045f804/c6efb88c996a721ff80945ff28f99bcf7038a025238d2fab8d1bcbe9d739aee7/freezer.state
	I1217 11:19:42.298549 3079514 api_server.go:204] freezer state: "THAWED"
	I1217 11:19:42.298586 3079514 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1217 11:19:42.307322 3079514 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1217 11:19:42.307408 3079514 status.go:463] multinode-488082 apiserver status = Running (err=<nil>)
	I1217 11:19:42.307441 3079514 status.go:176] multinode-488082 status: &{Name:multinode-488082 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:19:42.307503 3079514 status.go:174] checking status of multinode-488082-m02 ...
	I1217 11:19:42.307895 3079514 cli_runner.go:164] Run: docker container inspect multinode-488082-m02 --format={{.State.Status}}
	I1217 11:19:42.328279 3079514 status.go:371] multinode-488082-m02 host status = "Running" (err=<nil>)
	I1217 11:19:42.328303 3079514 host.go:66] Checking if "multinode-488082-m02" exists ...
	I1217 11:19:42.328715 3079514 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-488082-m02
	I1217 11:19:42.347068 3079514 host.go:66] Checking if "multinode-488082-m02" exists ...
	I1217 11:19:42.347379 3079514 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:19:42.347430 3079514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-488082-m02
	I1217 11:19:42.372178 3079514 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35868 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/multinode-488082-m02/id_ed25519 Username:docker}
	I1217 11:19:42.465871 3079514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:19:42.478883 3079514 status.go:176] multinode-488082-m02 status: &{Name:multinode-488082-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:19:42.478918 3079514 status.go:174] checking status of multinode-488082-m03 ...
	I1217 11:19:42.479220 3079514 cli_runner.go:164] Run: docker container inspect multinode-488082-m03 --format={{.State.Status}}
	I1217 11:19:42.497348 3079514 status.go:371] multinode-488082-m03 host status = "Stopped" (err=<nil>)
	I1217 11:19:42.497372 3079514 status.go:384] host is not running, skipping remaining checks
	I1217 11:19:42.497381 3079514 status.go:176] multinode-488082-m03 status: &{Name:multinode-488082-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
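Note the asserted exit codes above: minikube status exits 7 once any node host is stopped, so the test treats the non-zero exit as the expected outcome rather than a failure. Minimal reproduction against the same cluster:

    minikube -p multinode-488082 node stop m03
    minikube -p multinode-488082 status; echo $?    # per-node table, then 7 while m03 is down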

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.71s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-488082 node start m03 -v=5 --alsologtostderr: (6.936786314s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.71s)

TestMultiNode/serial/RestartKeepsNodes (72.4s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-488082
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-488082
E1217 11:19:59.220261 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-488082: (25.206501513s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-488082 --wait=true -v=5 --alsologtostderr
E1217 11:20:43.083414 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-488082 --wait=true -v=5 --alsologtostderr: (47.071516204s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-488082
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.40s)

TestMultiNode/serial/DeleteNode (5.69s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-488082 node delete m03: (4.977593591s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.69s)
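The readiness assertion after the delete is a kubectl go-template walk over node conditions; standalone it reads:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # prints one True per Ready node; two lines are expected once m03 is gone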

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.22s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-488082 stop: (24.010367134s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-488082 status: exit status 7 (107.62316ms)
-- stdout --
	multinode-488082
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-488082-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-488082 status --alsologtostderr: exit status 7 (104.31759ms)
-- stdout --
	multinode-488082
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-488082-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 11:21:32.479032 3088328 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:21:32.479155 3088328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:21:32.479167 3088328 out.go:374] Setting ErrFile to fd 2...
	I1217 11:21:32.479174 3088328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:21:32.479523 3088328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:21:32.479753 3088328 out.go:368] Setting JSON to false
	I1217 11:21:32.479777 3088328 mustload.go:66] Loading cluster: multinode-488082
	I1217 11:21:32.480469 3088328 config.go:182] Loaded profile config "multinode-488082": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 11:21:32.480496 3088328 status.go:174] checking status of multinode-488082 ...
	I1217 11:21:32.481269 3088328 cli_runner.go:164] Run: docker container inspect multinode-488082 --format={{.State.Status}}
	I1217 11:21:32.481671 3088328 notify.go:221] Checking for updates...
	I1217 11:21:32.500704 3088328 status.go:371] multinode-488082 host status = "Stopped" (err=<nil>)
	I1217 11:21:32.500747 3088328 status.go:384] host is not running, skipping remaining checks
	I1217 11:21:32.500754 3088328 status.go:176] multinode-488082 status: &{Name:multinode-488082 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:21:32.500783 3088328 status.go:174] checking status of multinode-488082-m02 ...
	I1217 11:21:32.501122 3088328 cli_runner.go:164] Run: docker container inspect multinode-488082-m02 --format={{.State.Status}}
	I1217 11:21:32.530605 3088328 status.go:371] multinode-488082-m02 host status = "Stopped" (err=<nil>)
	I1217 11:21:32.530631 3088328 status.go:384] host is not running, skipping remaining checks
	I1217 11:21:32.530637 3088328 status.go:176] multinode-488082-m02 status: &{Name:multinode-488082-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.22s)

TestMultiNode/serial/RestartMultiNode (49.9s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-488082 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-488082 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.198730008s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-488082 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.90s)

TestMultiNode/serial/ValidateNameConflict (37.94s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-488082
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-488082-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-488082-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.76531ms)
-- stdout --
	* [multinode-488082-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-488082-m02' is duplicated with machine name 'multinode-488082-m02' in profile 'multinode-488082'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-488082-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-488082-m03 --driver=docker  --container-runtime=containerd: (35.199799251s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-488082
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-488082: exit status 80 (377.966157ms)
-- stdout --
	* Adding node m03 to cluster multinode-488082 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-488082-m03 already exists in multinode-488082-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-488082-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-488082-m03: (2.209087028s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.94s)
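Two distinct guards fire in this test: exit 14 (MK_USAGE) when a new profile name collides with a machine name owned by an existing multinode profile, and exit 80 (GUEST_NODE_ADD) when node add would mint a node name another profile already claims. A minimal sketch of the first collision (profile name illustrative):

    minikube start -p demo --nodes=2 --driver=docker --container-runtime=containerd   # creates machines demo and demo-m02
    minikube start -p demo-m02    # rejected, exit status 14: duplicates the machine name demo-m02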

                                                
                                    
TestPreload (124.44s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-549762 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1217 11:23:36.152737 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-549762 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (59.993264432s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-549762 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-549762 image pull gcr.io/k8s-minikube/busybox: (2.280547766s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-549762
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-549762: (5.898717089s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-549762 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1217 11:24:28.205423 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-549762 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (53.531470082s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-549762 image list
helpers_test.go:176: Cleaning up "test-preload-549762" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-549762
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-549762: (2.490701615s)
--- PASS: TestPreload (124.44s)
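The scenario starts with preloading disabled, pulls an extra image, then restarts with the preload enabled to confirm the pulled image survives. Condensed (profile name illustrative, driver flags as in the test):

    minikube start -p preload-demo --preload=false --driver=docker --container-runtime=containerd
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true --driver=docker --container-runtime=containerd
    minikube -p preload-demo image list    # busybox should still be listed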

                                                
                                    
TestScheduledStopUnix (106.45s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-726997 --memory=3072 --driver=docker  --container-runtime=containerd
E1217 11:25:26.155780 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-726997 --memory=3072 --driver=docker  --container-runtime=containerd: (30.759323092s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-726997 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1217 11:25:39.927457 3104250 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:25:39.927618 3104250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:25:39.927652 3104250 out.go:374] Setting ErrFile to fd 2...
	I1217 11:25:39.927692 3104250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:25:39.928232 3104250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:25:39.928820 3104250 out.go:368] Setting JSON to false
	I1217 11:25:39.928944 3104250 mustload.go:66] Loading cluster: scheduled-stop-726997
	I1217 11:25:39.929315 3104250 config.go:182] Loaded profile config "scheduled-stop-726997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 11:25:39.929398 3104250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/config.json ...
	I1217 11:25:39.929580 3104250 mustload.go:66] Loading cluster: scheduled-stop-726997
	I1217 11:25:39.929704 3104250 config.go:182] Loaded profile config "scheduled-stop-726997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-726997 -n scheduled-stop-726997
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-726997 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1217 11:25:40.379032 3104338 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:25:40.379196 3104338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:25:40.379226 3104338 out.go:374] Setting ErrFile to fd 2...
	I1217 11:25:40.379248 3104338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:25:40.379529 3104338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:25:40.379818 3104338 out.go:368] Setting JSON to false
	I1217 11:25:40.380062 3104338 daemonize_unix.go:73] killing process 3104268 as it is an old scheduled stop
	I1217 11:25:40.380252 3104338 mustload.go:66] Loading cluster: scheduled-stop-726997
	I1217 11:25:40.380698 3104338 config.go:182] Loaded profile config "scheduled-stop-726997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 11:25:40.380821 3104338 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/config.json ...
	I1217 11:25:40.381026 3104338 mustload.go:66] Loading cluster: scheduled-stop-726997
	I1217 11:25:40.381172 3104338 config.go:182] Loaded profile config "scheduled-stop-726997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 11:25:40.390229 2924574 retry.go:31] will retry after 96.048µs: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.390868 2924574 retry.go:31] will retry after 180.573µs: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.392038 2924574 retry.go:31] will retry after 287.371µs: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.393248 2924574 retry.go:31] will retry after 422.553µs: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.394369 2924574 retry.go:31] will retry after 657.405µs: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.395492 2924574 retry.go:31] will retry after 1.019204ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.396648 2924574 retry.go:31] will retry after 616.979µs: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.397804 2924574 retry.go:31] will retry after 1.371639ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.399997 2924574 retry.go:31] will retry after 2.522604ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.403199 2924574 retry.go:31] will retry after 5.616363ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.409426 2924574 retry.go:31] will retry after 6.931476ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.416665 2924574 retry.go:31] will retry after 9.263841ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.426960 2924574 retry.go:31] will retry after 15.909688ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.443268 2924574 retry.go:31] will retry after 19.796266ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.464159 2924574 retry.go:31] will retry after 17.168515ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
I1217 11:25:40.482415 2924574 retry.go:31] will retry after 51.372596ms: open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-726997 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
E1217 11:25:43.082742 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-726997 -n scheduled-stop-726997
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-726997
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-726997 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1217 11:26:06.344376 3105024 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:26:06.344601 3105024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:26:06.344633 3105024 out.go:374] Setting ErrFile to fd 2...
	I1217 11:26:06.344655 3105024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:26:06.344954 3105024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:26:06.345257 3105024 out.go:368] Setting JSON to false
	I1217 11:26:06.345389 3105024 mustload.go:66] Loading cluster: scheduled-stop-726997
	I1217 11:26:06.345793 3105024 config.go:182] Loaded profile config "scheduled-stop-726997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1217 11:26:06.345900 3105024 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/scheduled-stop-726997/config.json ...
	I1217 11:26:06.346110 3105024 mustload.go:66] Loading cluster: scheduled-stop-726997
	I1217 11:26:06.346263 3105024 config.go:182] Loaded profile config "scheduled-stop-726997": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-726997
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-726997: exit status 7 (70.028314ms)
-- stdout --
	scheduled-stop-726997
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-726997 -n scheduled-stop-726997
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-726997 -n scheduled-stop-726997: exit status 7 (69.974998ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-726997" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-726997
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-726997: (4.069189281s)
--- PASS: TestScheduledStopUnix (106.45s)
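The schedule is carried by a detached helper process whose pid is written under the profile directory; scheduling again kills the previous helper, which is the "killing process ... as it is an old scheduled stop" line above. The user-facing surface, condensed (profile name illustrative):

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled   # disarm all pending stops
    minikube stop -p sched-demo --schedule 15s       # re-arm; once it fires, status reports Stopped and exits 7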

                                                
                                    
TestInsufficientStorage (12.28s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-466893 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-466893 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.73060557s)
-- stdout --
	{"specversion":"1.0","id":"9a7cf78f-11a3-45b3-8daf-ba30d48b6016","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-466893] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ac6e49d-9e4e-4419-9f84-de3d7efe2144","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22182"}}
	{"specversion":"1.0","id":"5587ffbe-e263-4026-a1cc-b1c7aed82e98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"52161895-eddb-472b-823f-2a8c5cbdf37f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig"}}
	{"specversion":"1.0","id":"2cf0366d-6bd1-434d-a209-8db6975932bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube"}}
	{"specversion":"1.0","id":"8aac7b4b-cd32-4c91-ab47-bce6042e82f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1601a376-037d-4f76-a58b-02e348ba2981","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"23dcab26-2a94-455c-aa5b-13f99c9c9ad2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3f90bf58-58b2-41fc-835d-fe14e3c89049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a2b060b4-e549-47d0-a9c4-ee51a3dec30c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8379fe20-fd4e-40da-8cde-b8a84b7f515c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9c22c2df-cea9-4a89-a367-ffbadf2318a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-466893\" primary control-plane node in \"insufficient-storage-466893\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"af80be56-c30c-4869-a3c2-7ea5c9ed2240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5741797a-5898-4059-852d-b642be449cb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4b0fcff-ebbe-4987-bda9-e28dcc10ce0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-466893 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-466893 --output=json --layout=cluster: exit status 7 (295.42938ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-466893","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-466893","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 11:27:05.572837 3106848 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-466893" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-466893 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-466893 --output=json --layout=cluster: exit status 7 (293.824109ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-466893","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-466893","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 11:27:05.867260 3106916 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-466893" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
	E1217 11:27:05.877591 3106916 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/insufficient-storage-466893/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-466893" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-466893
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-466893: (1.955959169s)
--- PASS: TestInsufficientStorage (12.28s)
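
Each line of the `--output=json` transcript above is a self-contained CloudEvents-style JSON object. A sketch, assuming only the envelope fields visible in this log, that scans such output and surfaces the `io.k8s.sigs.minikube.error` event carrying exit code 26:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // minikubeEvent models just the envelope fields seen in the log above.
    type minikubeEvent struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // Pipe in the JSON-lines output, e.g.:
        //   out/minikube-linux-arm64 start -p <profile> --output=json ... | thisprogram
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long

        for sc.Scan() {
            var ev minikubeEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip any non-JSON noise
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error event %s (exitcode %s): %s\n",
                    ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }

Run against the transcript above, this would report RSRC_DOCKER_STORAGE with exitcode 26.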

                                                
                                    
TestRunningBinaryUpgrade (310.75s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2198541657 start -p running-upgrade-075887 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2198541657 start -p running-upgrade-075887 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.765089864s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-075887 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1217 11:35:43.083223 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:35:51.281642 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:36:39.222023 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:38:36.153344 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:39:28.205235 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-075887 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m32.800316669s)
helpers_test.go:176: Cleaning up "running-upgrade-075887" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-075887
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-075887: (2.016439309s)
--- PASS: TestRunningBinaryUpgrade (310.75s)

                                                
                                    
TestMissingContainerUpgrade (139.92s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3649097919 start -p missing-upgrade-316966 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3649097919 start -p missing-upgrade-316966 --memory=3072 --driver=docker  --container-runtime=containerd: (1m3.23199665s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-316966
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-316966
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-316966 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-316966 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m10.184273135s)
helpers_test.go:176: Cleaning up "missing-upgrade-316966" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-316966
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-316966: (4.129252602s)
--- PASS: TestMissingContainerUpgrade (139.92s)
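
The sequence here: create the cluster with the archived v1.35.0 binary, remove its Docker container out from under it, then let the current binary repair the profile. A hedged reproduction sketch (the binary paths are illustrative stand-ins for the tempfiles shown above):

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes a command and aborts with its combined output on failure.
    func run(name string, args ...string) {
        if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
            log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
        }
    }

    func main() {
        const profile = "missing-upgrade-316966"
        run("/tmp/minikube-v1.35.0", "start", "-p", profile, "--memory=3072",
            "--driver=docker", "--container-runtime=containerd")
        // The node container is named after the profile; deleting it simulates
        // a container lost between minikube versions.
        run("docker", "stop", profile)
        run("docker", "rm", profile)
        // The newer binary must recreate the missing container.
        run("out/minikube-linux-arm64", "start", "-p", profile,
            "--alsologtostderr", "-v=1", "--driver=docker",
            "--container-runtime=containerd")
    }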

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-366220 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-366220 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (96.930667ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-366220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-366220 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-366220 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.713954246s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-366220 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.18s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (24.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-366220 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-366220 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.123355158s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-366220 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-366220 status -o json: exit status 2 (301.621022ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-366220","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-366220
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-366220: (2.001501245s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.43s)

                                                
                                    
TestNoKubernetes/serial/Start (6.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-366220 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-366220 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (6.924253628s)
--- PASS: TestNoKubernetes/serial/Start (6.92s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-366220 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-366220 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.884965ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
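
The `exit status 3` in stderr is `systemctl is-active` reporting an inactive unit (it exits 0 only when the unit is active), which `minikube ssh` then surfaces as a non-zero exit of its own. A sketch of the same assertion, assuming the profile from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The remote command fails when kubelet is not running, so a non-zero
        // exit here is the expected ("passing") outcome.
        err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-366220",
            "sudo systemctl is-active --quiet service kubelet").Run()
        if err != nil {
            fmt.Println("kubelet inactive, as expected:", err)
            return
        }
        fmt.Println("kubelet unexpectedly active")
    }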

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-366220
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-366220: (1.29899858s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-366220 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-366220 --driver=docker  --container-runtime=containerd: (6.543585252s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-366220 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-366220 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.04351ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E1217 11:29:28.205256 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (299.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.800216315 start -p stopped-upgrade-387157 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.800216315 start -p stopped-upgrade-387157 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (29.363961821s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.800216315 -p stopped-upgrade-387157 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.800216315 -p stopped-upgrade-387157 stop: (1.259794034s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-387157 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1217 11:30:43.083367 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:33:36.153252 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:28.205840 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-387157 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m29.101951765s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (299.73s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-387157
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-387157: (2.290339026s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.29s)

                                                
                                    
TestPause/serial/Start (49.58s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-769685 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-769685 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (49.584235921s)
--- PASS: TestPause/serial/Start (49.58s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.24s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-769685 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-769685 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.222698354s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.24s)

                                                
                                    
TestPause/serial/Pause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-769685 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-769685 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-769685 --output=json --layout=cluster: exit status 2 (325.701927ms)

                                                
                                                
-- stdout --
	{"Name":"pause-769685","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-769685","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
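
The `--layout=cluster` payload encodes state as HTTP-like status codes; 200 (OK), 405 (Stopped), 418 (Paused), 500 (Error), and 507 (InsufficientStorage) all appear in this report. A decoding sketch covering only the fields visible in these logs:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type component struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    type node struct {
        Name       string               `json:"Name"`
        StatusCode int                  `json:"StatusCode"`
        StatusName string               `json:"StatusName"`
        Components map[string]component `json:"Components"`
    }

    type clusterState struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
        Nodes      []node `json:"Nodes"`
    }

    func main() {
        // Pipe in: out/minikube-linux-arm64 status -p <profile> --output=json --layout=cluster
        var st clusterState
        if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
        for _, n := range st.Nodes {
            for _, c := range n.Components {
                fmt.Printf("  %s/%s: %d %s\n", n.Name, c.Name, c.StatusCode, c.StatusName)
            }
        }
    }

For the paused profile above this would print 418 Paused for the apiserver and 405 Stopped for the kubelet.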

                                                
                                    
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-769685 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
TestPause/serial/PauseAgain (0.85s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-769685 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

                                                
                                    
TestPause/serial/DeletePaused (2.81s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-769685 --alsologtostderr -v=5
E1217 11:40:43.082860 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-769685 --alsologtostderr -v=5: (2.806142079s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.5s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-769685
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-769685: exit status 1 (20.805834ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-769685: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)
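
Deletion is verified negatively: `docker volume inspect` must fail once the profile's volume is gone. A sketch of that check, assuming the profile name from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // After `minikube delete -p pause-769685`, the Docker volume named
        // after the profile should no longer exist.
        out, err := exec.Command("docker", "volume", "inspect", "pause-769685").CombinedOutput()
        switch {
        case err != nil && strings.Contains(string(out), "no such volume"):
            fmt.Println("volume removed; cleanup verified")
        case err != nil:
            fmt.Println("inspect failed for another reason:", err)
        default:
            fmt.Println("volume still present:", string(out))
        }
    }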

                                                
                                    
TestNetworkPlugins/group/false (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-348887 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-348887 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (188.907446ms)

                                                
                                                
-- stdout --
	* [false-348887] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22182
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:41:24.657241 3166781 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:41:24.657467 3166781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:41:24.657505 3166781 out.go:374] Setting ErrFile to fd 2...
	I1217 11:41:24.657528 3166781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:41:24.658505 3166781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
	I1217 11:41:24.659027 3166781 out.go:368] Setting JSON to false
	I1217 11:41:24.659956 3166781 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":62635,"bootTime":1765909050,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 11:41:24.660028 3166781 start.go:143] virtualization:  
	I1217 11:41:24.663659 3166781 out.go:179] * [false-348887] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 11:41:24.666653 3166781 out.go:179]   - MINIKUBE_LOCATION=22182
	I1217 11:41:24.666750 3166781 notify.go:221] Checking for updates...
	I1217 11:41:24.672546 3166781 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:41:24.675569 3166781 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
	I1217 11:41:24.678533 3166781 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
	I1217 11:41:24.681392 3166781 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 11:41:24.684372 3166781 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:41:24.687765 3166781 config.go:182] Loaded profile config "kubernetes-upgrade-452067": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1217 11:41:24.687958 3166781 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:41:24.722812 3166781 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 11:41:24.722927 3166781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 11:41:24.780591 3166781 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 11:41:24.771275983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 11:41:24.780698 3166781 docker.go:319] overlay module found
	I1217 11:41:24.785612 3166781 out.go:179] * Using the docker driver based on user configuration
	I1217 11:41:24.788318 3166781 start.go:309] selected driver: docker
	I1217 11:41:24.788334 3166781 start.go:927] validating driver "docker" against <nil>
	I1217 11:41:24.788348 3166781 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:41:24.792021 3166781 out.go:203] 
	W1217 11:41:24.794858 3166781 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1217 11:41:24.797679 3166781 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-348887 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-348887" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 11:29:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-452067
contexts:
- context:
    cluster: kubernetes-upgrade-452067
    user: kubernetes-upgrade-452067
  name: kubernetes-upgrade-452067
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-452067
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/client.crt
    client-key: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-348887

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-348887"

                                                
                                                
----------------------- debugLogs end: false-348887 [took: 3.265512558s] --------------------------------
helpers_test.go:176: Cleaning up "false-348887" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-348887
--- PASS: TestNetworkPlugins/group/false (3.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (59.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-112124 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1217 11:43:36.152646 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-112124 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (59.276729514s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.28s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-112124 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a255ac2d-b984-43a1-bdcc-551b2f79fffd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a255ac2d-b984-43a1-bdcc-551b2f79fffd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003617897s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-112124 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)
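
Note: the DeployApp step reduces to "create the pod from testdata/busybox.yaml, wait for the integration-test=busybox selector to reach Running, then exec `ulimit -n` in it". A standalone sketch of that wait-and-exec loop, not the harness's actual helper (that lives in helpers_test.go); kubectl on PATH and the context name from this run are assumed:

// waitexec.go: poll a label selector until a matching pod runs, then exec.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := []string{"--context", "old-k8s-version-112124"}
	deadline := time.Now().Add(8 * time.Minute) // same budget as the test
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", append(ctx,
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}")...).Output()
		// Simplification: Running phase only; the harness also tracks the
		// Ready condition (hence the Pending / ContainersNotReady lines above).
		if err == nil && strings.Contains(string(out), "Running") {
			res, _ := exec.Command("kubectl", append(ctx,
				"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")...).Output()
			fmt.Printf("ulimit -n: %s", res)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for busybox")
}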

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-112124 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-112124 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.335929703s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-112124 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)
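
Note: the --images/--registries pairs rewrite where the metrics-server addon pulls from; fake.domain is plainly not a real registry, and the step only describes the resulting deployment rather than waiting for it to come up. A minimal Go wrapper equivalent to the command the test runs (binary path and profile name as in this run):

// enableaddon.go
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"addons", "enable", "metrics-server",
		"-p", "old-k8s-version-112124",
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain").CombinedOutput()
	if err != nil {
		log.Fatalf("addons enable: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}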

TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-112124 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-112124 --alsologtostderr -v=3: (12.156169854s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-112124 -n old-k8s-version-112124
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-112124 -n old-k8s-version-112124: exit status 7 (71.458903ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-112124 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
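
Note: `minikube status` encodes cluster state in its exit code, which is why the harness logs "status error: exit status 7 (may be ok)" instead of failing: right after `minikube stop`, a non-running host is the expected answer. A sketch of reading the code rather than treating any non-zero exit as fatal:

// statuscode.go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func hostStatus(profile string) (string, int, error) {
	out, err := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode(), nil // stdout still carries "Stopped"
	}
	return string(out), 0, err
}

func main() {
	out, code, err := hostStatus("old-k8s-version-112124")
	if err != nil {
		panic(err)
	}
	fmt.Printf("host=%q exit=%d\n", out, code) // expect "Stopped", 7 here
}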

TestStartStop/group/old-k8s-version/serial/SecondStart (51.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-112124 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1217 11:44:28.205776 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-112124 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.019331749s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-112124 -n old-k8s-version-112124
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.44s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-nckph" [c29d78de-b8fc-4a83-b5bd-3800cfbe512c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004273446s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-nckph" [c29d78de-b8fc-4a83-b5bd-3800cfbe512c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003598175s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-112124 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-112124 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
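
Note: VerifyKubernetesImages lists the images present in the container runtime and flags anything outside the expected Kubernetes/minikube set, which is why the kindnet and busybox images surface as "Found non-minikube image" while the control-plane images do not. A toy version of that filter over images named in this log; the allowlist is an illustrative subset, not the test's real table:

// imagefilter.go
package main

import (
	"fmt"
	"strings"
)

func main() {
	images := []string{ // sample drawn from the log above
		"registry.k8s.io/kube-apiserver:v1.28.0",
		"kindest/kindnetd:v20230511-dc714da8",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	allow := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range images {
		ok := false
		for _, p := range allow {
			if strings.HasPrefix(img, p) {
				ok = true
				break
			}
		}
		if !ok {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}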

TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-112124 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-112124 -n old-k8s-version-112124
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-112124 -n old-k8s-version-112124: exit status 2 (358.338744ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-112124 -n old-k8s-version-112124
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-112124 -n old-k8s-version-112124: exit status 2 (328.422816ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-112124 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-112124 -n old-k8s-version-112124
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-112124 -n old-k8s-version-112124
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.24s)
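
Note: the Pause step verifies the paused shape through status templates: {{.APIServer}} should print Paused and {{.Kubelet}} Stopped, each with exit status 2, before `unpause` restores both. A sketch of that dual check (same exit-code caveat as above):

// pausedshape.go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "old-k8s-version-112124"
	for _, tmpl := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
		out, err := exec.Command("out/minikube-linux-arm64",
			"status", "--format="+tmpl, "-p", profile, "-n", profile).Output()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode()
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s (exit %d)\n", tmpl, strings.TrimSpace(string(out)), code)
	}
}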

TestStartStop/group/embed-certs/serial/FirstStart (57.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (57.157948987s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.16s)

TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-628462 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2991daf8-75cb-440a-bc64-7187544982b6] Pending
helpers_test.go:353: "busybox" [2991daf8-75cb-440a-bc64-7187544982b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2991daf8-75cb-440a-bc64-7187544982b6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.002800986s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-628462 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-628462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-628462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.033163758s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-628462 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-628462 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-628462 --alsologtostderr -v=3: (12.085942379s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-628462 -n embed-certs-628462
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-628462 -n embed-certs-628462: exit status 7 (69.5155ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-628462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (50.53s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-628462 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (50.173960305s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-628462 -n embed-certs-628462
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.53s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-s8f8z" [8a267704-31e8-4d53-bfcb-6d546c6f817f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.020604013s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-s8f8z" [8a267704-31e8-4d53-bfcb-6d546c6f817f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004348323s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-628462 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-628462 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-628462 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-628462 -n embed-certs-628462
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-628462 -n embed-certs-628462: exit status 2 (314.260083ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-628462 -n embed-certs-628462
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-628462 -n embed-certs-628462: exit status 2 (340.7574ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-628462 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-628462 -n embed-certs-628462
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-628462 -n embed-certs-628462
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.06s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
E1217 11:48:36.152564 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:50.515594 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:50.522399 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:50.534082 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:50.555509 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:50.596921 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:50.678337 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:50.840179 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:51.161722 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:51.803897 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:53.085662 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:48:55.647692 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:49:00.769147 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (50.709437362s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.71s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-224095 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3fce9acd-0aec-4978-93de-fa0cf5f4248a] Pending
helpers_test.go:353: "busybox" [3fce9acd-0aec-4978-93de-fa0cf5f4248a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3fce9acd-0aec-4978-93de-fa0cf5f4248a] Running
E1217 11:49:11.011310 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004377836s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-224095 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-224095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-224095 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-224095 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-224095 --alsologtostderr -v=3: (12.107960322s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-224095 -n default-k8s-diff-port-224095
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-224095 -n default-k8s-diff-port-224095: exit status 7 (84.953567ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-224095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
E1217 11:49:28.205515 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:49:31.493039 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:50:12.454366 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/old-k8s-version-112124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-224095 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (49.476111196s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-224095 -n default-k8s-diff-port-224095
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.87s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-fv28x" [3cae2ea9-cae5-4154-aee9-91d1cf274a4b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002841188s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-fv28x" [3cae2ea9-cae5-4154-aee9-91d1cf274a4b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003360092s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-224095 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-224095 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-224095 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-224095 -n default-k8s-diff-port-224095
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-224095 -n default-k8s-diff-port-224095: exit status 2 (330.875246ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-224095 -n default-k8s-diff-port-224095
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-224095 -n default-k8s-diff-port-224095: exit status 2 (347.4496ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-224095 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-224095 -n default-k8s-diff-port-224095
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-224095 -n default-k8s-diff-port-224095
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.10s)

TestStartStop/group/no-preload/serial/Stop (1.31s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-118262 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-118262 --alsologtostderr -v=3: (1.309960299s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.31s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-118262 -n no-preload-118262: exit status 7 (69.231713ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-118262 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-669680 --alsologtostderr -v=3
E1217 12:00:43.083107 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-669680 --alsologtostderr -v=3: (1.337576961s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-669680 -n newest-cni-669680: exit status 7 (67.053855ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-669680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-669680 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestNetworkPlugins/group/auto/Start (52.09s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (52.09174966s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.09s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-348887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)
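
Note: KubeletFlags captures the kubelet command line via `pgrep -a kubelet` over `minikube ssh`, so it can be inspected for the flags the chosen driver/runtime combination should have produced. A sketch; the containerd-socket assertion at the end is a hypothetical example of such a check, not one this log performs:

// kubeletflags.go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"ssh", "-p", "auto-348887", "pgrep -a kubelet").Output()
	if err != nil {
		panic(err)
	}
	cmdline := string(out)
	fmt.Print(cmdline)
	// Hypothetical assertion for --container-runtime=containerd clusters:
	if strings.Contains(cmdline, "containerd.sock") {
		fmt.Println("kubelet is wired to containerd")
	}
}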

TestNetworkPlugins/group/auto/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-348887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-j79mj" [e8160f91-33bb-4859-b856-f0b50742b0c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-j79mj" [e8160f91-33bb-4859-b856-f0b50742b0c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004016695s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-348887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)
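
Note: the DNS probe resolves the in-cluster name kubernetes.default from inside the netcat deployment, exercising the service-DNS path the selected network plugin has to carry. A standalone equivalent:

// dnsprobe.go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "auto-348887",
		"exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("DNS lookup failed:", err)
	}
}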

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
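
Note: HairPin is the strictest of the three connectivity probes: the netcat pod dials its own Service name, so the packet leaves the pod, hits the service VIP, and must be NATed back into the very same pod (hairpin mode), whereas the Localhost probe above never leaves the pod. A standalone sketch of the same check; the "netcat" target is assumed to be a Service created by testdata/netcat-deployment.yaml:

// hairpin.go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -z: connect-only scan, no payload; -w 5: five-second timeout.
	out, err := exec.Command("kubectl", "--context", "auto-348887",
		"exec", "deployment/netcat", "--", "/bin/sh", "-c",
		"nc -w 5 -i 5 -z netcat 8080").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("hairpin connection failed:", err)
	}
}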

TestNetworkPlugins/group/kindnet/Start (55.46s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (55.458607917s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.46s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-5pjwf" [7e08efa0-d45d-40fe-bb1b-8d56ef894741] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003565152s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-348887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-348887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2krdb" [e0353f81-b097-46ab-b031-e1edf500346f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2krdb" [e0353f81-b097-46ab-b031-e1edf500346f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.016767826s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-348887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/Start (56.65s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (56.646722977s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.65s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-x5xh8" [bd2824c5-0164-45d4-9070-4991ccae38c6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005317257s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
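
Note: the calico-node pod above is reported as "Running / Ready:ContainersNotReady ..." before settling at plain "Running": pod phase and the Ready condition are tracked separately, and the helper waits on both. A client-go sketch of reading that distinction (assumes a reachable kubeconfig; this is not the harness's own code):

// readiness.go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=calico-node"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}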

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-348887 "pgrep -a kubelet"
I1217 12:11:09.575821 2924574 config.go:182] Loaded profile config "calico-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-348887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hqb5w" [52fe1384-005e-4bb2-96c8-1ec4ef5c0457] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-hqb5w" [52fe1384-005e-4bb2-96c8-1ec4ef5c0457] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003976503s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.25s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-348887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (64.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m4.155039041s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.16s)
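Unlike the named plugins elsewhere in this group, --cni here is given a path: minikube accepts either a built-in plugin name or a CNI manifest file. The essential flags, trimmed from the full invocation above:

    # apply a user-supplied CNI manifest instead of a built-in plugin
    out/minikube-linux-arm64 start -p custom-flannel-348887 \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd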

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-348887 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-348887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6qpcr" [59a48c41-59b3-471c-b30f-d6c1ef3e59d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6qpcr" [59a48c41-59b3-471c-b30f-d6c1ef3e59d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003695323s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-348887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (44.07s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (44.065711751s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.07s)
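--enable-default-cni=true is the legacy spelling for minikube's built-in bridge CNI; recent releases deprecate it in favor of --cni=bridge, which the bridge group below exercises directly. Assuming that equivalence holds, these two invocations should behave the same, other flags held constant:

    out/minikube-linux-arm64 start -p enable-default-cni-348887 --enable-default-cni=true ...
    out/minikube-linux-arm64 start -p enable-default-cni-348887 --cni=bridge ...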

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-348887 "pgrep -a kubelet"
I1217 12:14:02.459547 2924574 config.go:182] Loaded profile config "enable-default-cni-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-348887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wjfdn" [db8fa9a5-bdeb-48ca-930e-36e2fe43cd65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wjfdn" [db8fa9a5-bdeb-48ca-930e-36e2fe43cd65] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003904546s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-348887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (58.97s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (58.974457366s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.97s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (82.49s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-348887 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m22.486973765s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-wgld7" [2ca6a8fc-ab77-4c70-92d3-5eb1e04e60cc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003406048s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-348887 "pgrep -a kubelet"
I1217 12:15:37.870284 2924574 config.go:182] Loaded profile config "flannel-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.35s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-348887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ftrgn" [39268d90-5413-4280-85f9-01f18b4f4b1a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 12:15:43.082891 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-ftrgn" [39268d90-5413-4280-85f9-01f18b4f4b1a] Running
E1217 12:15:48.085309 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/auto-348887/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004146227s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-348887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-348887 "pgrep -a kubelet"
I1217 12:16:49.300525 2924574 config.go:182] Loaded profile config "bridge-348887": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.3s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-348887 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5s2x5" [0fd19a33-5c82-4034-8564-30c93bd3b841] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5s2x5" [0fd19a33-5c82-4034-8564-30c93bd3b841] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.005985958s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-348887 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-348887 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0.46
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.16
392 TestNetworkPlugins/group/kubenet 3.51
400 TestNetworkPlugins/group/cilium 3.95
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
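"Preload exists" means the combined images-plus-binaries tarball for this Kubernetes version and runtime was already downloaded, so per-image caching is redundant. With minikube's default cache layout its presence can be checked directly; the path below is the default location, not something this test prints:

    # one tarball per k8s version / container runtime / architecture
    ls ~/.minikube/cache/preloaded-tarball/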

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.46s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-929362 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-929362" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-929362
--- SKIP: TestDownloadOnlyKic (0.46s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-003095" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-003095
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.51s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-348887 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-348887

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-348887

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-348887

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-348887

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-348887

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-348887

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-348887

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-348887

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-348887

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-348887

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /etc/hosts:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /etc/resolv.conf:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-348887

>>> host: crictl pods:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: crictl containers:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> k8s: describe netcat deployment:
error: context "kubenet-348887" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-348887" does not exist

>>> k8s: netcat logs:
error: context "kubenet-348887" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-348887" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-348887" does not exist

>>> k8s: coredns logs:
error: context "kubenet-348887" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-348887" does not exist

>>> k8s: api server logs:
error: context "kubenet-348887" does not exist

>>> host: /etc/cni:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: ip a s:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: ip r s:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: iptables-save:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: iptables table nat:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-348887" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-348887" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-348887" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: kubelet daemon config:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> k8s: kubelet logs:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 11:29:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-452067
contexts:
- context:
    cluster: kubernetes-upgrade-452067
    user: kubernetes-upgrade-452067
  name: kubernetes-upgrade-452067
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-452067
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/client.crt
    client-key: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-348887

>>> host: docker daemon status:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: docker daemon config:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: docker system info:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: cri-docker daemon status:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: cri-docker daemon config:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: cri-dockerd version:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: containerd daemon status:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: containerd daemon config:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: containerd config dump:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: crio daemon status:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: crio daemon config:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: /etc/crio:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"

>>> host: crio config:
* Profile "kubenet-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-348887"
----------------------- debugLogs end: kubenet-348887 [took: 3.344565144s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-348887" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-348887
--- SKIP: TestNetworkPlugins/group/kubenet (3.51s)

TestNetworkPlugins/group/cilium (3.95s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-348887 [pass: true] --------------------------------
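
Note: every entry below fails the same way because the cilium-348887 profile and its kubeconfig context were never created before debugLogs ran. A quick way to confirm that state up front (hypothetical commands, not part of the harness output):

  $ out/minikube-linux-arm64 profile list   # cilium-348887 does not appear in the profile table
  $ kubectl config get-contexts             # no cilium-348887 context is defined
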
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-348887

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-348887

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-348887

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-348887

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-348887

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-348887

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-348887

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-348887

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-348887

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-348887

>>> host: /etc/nsswitch.conf:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /etc/hosts:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /etc/resolv.conf:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-348887

>>> host: crictl pods:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: crictl containers:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> k8s: describe netcat deployment:
error: context "cilium-348887" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-348887" does not exist

>>> k8s: netcat logs:
error: context "cilium-348887" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-348887" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-348887" does not exist

>>> k8s: coredns logs:
error: context "cilium-348887" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-348887" does not exist

>>> k8s: api server logs:
error: context "cilium-348887" does not exist

>>> host: /etc/cni:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: ip a s:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: ip r s:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: iptables-save:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: iptables table nat:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-348887

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-348887

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-348887" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-348887" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-348887

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-348887

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-348887" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-348887" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-348887" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-348887" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-348887" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: kubelet daemon config:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> k8s: kubelet logs:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 11:29:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-452067
contexts:
- context:
    cluster: kubernetes-upgrade-452067
    user: kubernetes-upgrade-452067
  name: kubernetes-upgrade-452067
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-452067
  user:
    client-certificate: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/client.crt
    client-key: /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/kubernetes-upgrade-452067/client.key
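
Note: the kubeconfig above defines only the kubernetes-upgrade-452067 entry and leaves current-context empty, which is why kubectl reports "context was not found for specified context: cilium-348887" throughout this section. A minimal check (hypothetical commands, not part of the test run):

  $ kubectl config get-contexts      # lists only kubernetes-upgrade-452067
  $ kubectl config current-context   # fails because current-context is not set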

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-348887

>>> host: docker daemon status:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: docker daemon config:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: docker system info:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: cri-docker daemon status:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: cri-docker daemon config:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: cri-dockerd version:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: containerd daemon status:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: containerd daemon config:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: containerd config dump:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: crio daemon status:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: crio daemon config:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: /etc/crio:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

>>> host: crio config:
* Profile "cilium-348887" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-348887"

----------------------- debugLogs end: cilium-348887 [took: 3.776904441s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-348887" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-348887
--- SKIP: TestNetworkPlugins/group/cilium (3.95s)